Test Report: Docker_Linux_docker_arm64 18706

c94ef6ff19ad65e169e276817a1b4f9eee2ec8a0:2024-04-22:34155

Tests failed (2/342)

| Order | Failed Test                                            | Duration (s) |
|-------|--------------------------------------------------------|--------------|
| 30    | TestAddons/parallel/Ingress                            | 36.45        |
| 371   | TestStartStop/group/old-k8s-version/serial/SecondStart | 375.34       |
TestAddons/parallel/Ingress (36.45s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-613799 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-613799 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-613799 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [a7640c82-6c22-4e74-8336-6791ff97ca08] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [a7640c82-6c22-4e74-8336-6791ff97ca08] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.003971792s
addons_test.go:262: (dbg) Run:  out/minikube-linux-arm64 -p addons-613799 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-613799 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-arm64 -p addons-613799 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:297: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.075055413s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
addons_test.go:299: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:303: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
addons_test.go:306: (dbg) Run:  out/minikube-linux-arm64 -p addons-613799 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:311: (dbg) Run:  out/minikube-linux-arm64 -p addons-613799 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-arm64 -p addons-613799 addons disable ingress --alsologtostderr -v=1: (7.733123816s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-613799
helpers_test.go:235: (dbg) docker inspect addons-613799:

-- stdout --
	[
	    {
	        "Id": "a0924726eafad31e4634a34a2fb1c0e671325097413e9c868d83abd72ded82b4",
	        "Created": "2024-04-22T16:57:45.810840232Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 8850,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-04-22T16:57:46.168046745Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c9315e0f61546d7b9630cf89252fa7f614fc966830e816cca5333df5c944376f",
	        "ResolvConfPath": "/var/lib/docker/containers/a0924726eafad31e4634a34a2fb1c0e671325097413e9c868d83abd72ded82b4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a0924726eafad31e4634a34a2fb1c0e671325097413e9c868d83abd72ded82b4/hostname",
	        "HostsPath": "/var/lib/docker/containers/a0924726eafad31e4634a34a2fb1c0e671325097413e9c868d83abd72ded82b4/hosts",
	        "LogPath": "/var/lib/docker/containers/a0924726eafad31e4634a34a2fb1c0e671325097413e9c868d83abd72ded82b4/a0924726eafad31e4634a34a2fb1c0e671325097413e9c868d83abd72ded82b4-json.log",
	        "Name": "/addons-613799",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "addons-613799:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-613799",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/2612615ae463d9de0e2847bd7c33f51e82ea20a53642b2fdfdd125d7b3da3cb9-init/diff:/var/lib/docker/overlay2/b1699f4b68a9298b206924fbb5011a78112fb741c2187f99822d61619a4228cf/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2612615ae463d9de0e2847bd7c33f51e82ea20a53642b2fdfdd125d7b3da3cb9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2612615ae463d9de0e2847bd7c33f51e82ea20a53642b2fdfdd125d7b3da3cb9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2612615ae463d9de0e2847bd7c33f51e82ea20a53642b2fdfdd125d7b3da3cb9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-613799",
	                "Source": "/var/lib/docker/volumes/addons-613799/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-613799",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-613799",
	                "name.minikube.sigs.k8s.io": "addons-613799",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7dc1d8f6ffc76118ae48996a5965c9ff656e33b73adceb30683f7b31ac315983",
	            "SandboxKey": "/var/run/docker/netns/7dc1d8f6ffc7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-613799": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "64466a61deeb8e3af8e1590cff3254cb1711fe90d3411099c3d994a34d71ee7e",
	                    "EndpointID": "b97413f941decd789673fdbd7ed61ef31840ac16723b5ec0c901bfba45d0622f",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "addons-613799",
	                        "a0924726eafa"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-613799 -n addons-613799
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-613799 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-613799 logs -n 25: (1.005522489s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-303600                                                                     | download-only-303600   | jenkins | v1.33.0 | 22 Apr 24 16:57 UTC | 22 Apr 24 16:57 UTC |
	| delete  | -p download-only-885518                                                                     | download-only-885518   | jenkins | v1.33.0 | 22 Apr 24 16:57 UTC | 22 Apr 24 16:57 UTC |
	| delete  | -p download-only-303600                                                                     | download-only-303600   | jenkins | v1.33.0 | 22 Apr 24 16:57 UTC | 22 Apr 24 16:57 UTC |
	| start   | --download-only -p                                                                          | download-docker-836605 | jenkins | v1.33.0 | 22 Apr 24 16:57 UTC |                     |
	|         | download-docker-836605                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	| delete  | -p download-docker-836605                                                                   | download-docker-836605 | jenkins | v1.33.0 | 22 Apr 24 16:57 UTC | 22 Apr 24 16:57 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-196031   | jenkins | v1.33.0 | 22 Apr 24 16:57 UTC |                     |
	|         | binary-mirror-196031                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:38499                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-196031                                                                     | binary-mirror-196031   | jenkins | v1.33.0 | 22 Apr 24 16:57 UTC | 22 Apr 24 16:57 UTC |
	| addons  | disable dashboard -p                                                                        | addons-613799          | jenkins | v1.33.0 | 22 Apr 24 16:57 UTC |                     |
	|         | addons-613799                                                                               |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-613799          | jenkins | v1.33.0 | 22 Apr 24 16:57 UTC |                     |
	|         | addons-613799                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-613799 --wait=true                                                                | addons-613799          | jenkins | v1.33.0 | 22 Apr 24 16:57 UTC | 22 Apr 24 16:59 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --driver=docker                                                               |                        |         |         |                     |                     |
	|         |  --container-runtime=docker                                                                 |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| ip      | addons-613799 ip                                                                            | addons-613799          | jenkins | v1.33.0 | 22 Apr 24 17:00 UTC | 22 Apr 24 17:00 UTC |
	| addons  | addons-613799 addons disable                                                                | addons-613799          | jenkins | v1.33.0 | 22 Apr 24 17:00 UTC | 22 Apr 24 17:00 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-613799          | jenkins | v1.33.0 | 22 Apr 24 17:00 UTC | 22 Apr 24 17:00 UTC |
	|         | -p addons-613799                                                                            |                        |         |         |                     |                     |
	| addons  | addons-613799 addons                                                                        | addons-613799          | jenkins | v1.33.0 | 22 Apr 24 17:00 UTC | 22 Apr 24 17:00 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-613799 ssh cat                                                                       | addons-613799          | jenkins | v1.33.0 | 22 Apr 24 17:00 UTC | 22 Apr 24 17:00 UTC |
	|         | /opt/local-path-provisioner/pvc-ca4b7ddd-c8b8-43eb-829c-c7146edd24e8_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-613799 addons disable                                                                | addons-613799          | jenkins | v1.33.0 | 22 Apr 24 17:00 UTC | 22 Apr 24 17:01 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-613799 addons                                                                        | addons-613799          | jenkins | v1.33.0 | 22 Apr 24 17:00 UTC | 22 Apr 24 17:00 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-613799          | jenkins | v1.33.0 | 22 Apr 24 17:00 UTC | 22 Apr 24 17:00 UTC |
	|         | addons-613799                                                                               |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-613799          | jenkins | v1.33.0 | 22 Apr 24 17:00 UTC | 22 Apr 24 17:00 UTC |
	|         | -p addons-613799                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-613799 addons                                                                        | addons-613799          | jenkins | v1.33.0 | 22 Apr 24 17:00 UTC | 22 Apr 24 17:00 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-613799          | jenkins | v1.33.0 | 22 Apr 24 17:01 UTC | 22 Apr 24 17:01 UTC |
	|         | addons-613799                                                                               |                        |         |         |                     |                     |
	| ssh     | addons-613799 ssh curl -s                                                                   | addons-613799          | jenkins | v1.33.0 | 22 Apr 24 17:01 UTC | 22 Apr 24 17:01 UTC |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-613799 ip                                                                            | addons-613799          | jenkins | v1.33.0 | 22 Apr 24 17:01 UTC | 22 Apr 24 17:01 UTC |
	| addons  | addons-613799 addons disable                                                                | addons-613799          | jenkins | v1.33.0 | 22 Apr 24 17:01 UTC | 22 Apr 24 17:01 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-613799 addons disable                                                                | addons-613799          | jenkins | v1.33.0 | 22 Apr 24 17:01 UTC | 22 Apr 24 17:01 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/22 16:57:21
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.22.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0422 16:57:21.560676    8376 out.go:291] Setting OutFile to fd 1 ...
	I0422 16:57:21.560843    8376 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 16:57:21.560855    8376 out.go:304] Setting ErrFile to fd 2...
	I0422 16:57:21.560860    8376 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 16:57:21.561100    8376 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18706-2371/.minikube/bin
	I0422 16:57:21.561582    8376 out.go:298] Setting JSON to false
	I0422 16:57:21.562392    8376 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":2389,"bootTime":1713802653,"procs":145,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0422 16:57:21.562464    8376 start.go:139] virtualization:  
	I0422 16:57:21.565979    8376 out.go:177] * [addons-613799] minikube v1.33.0 on Ubuntu 20.04 (arm64)
	I0422 16:57:21.569446    8376 out.go:177]   - MINIKUBE_LOCATION=18706
	I0422 16:57:21.569483    8376 notify.go:220] Checking for updates...
	I0422 16:57:21.575173    8376 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0422 16:57:21.578245    8376 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18706-2371/kubeconfig
	I0422 16:57:21.580538    8376 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18706-2371/.minikube
	I0422 16:57:21.583151    8376 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0422 16:57:21.585705    8376 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0422 16:57:21.588324    8376 driver.go:392] Setting default libvirt URI to qemu:///system
	I0422 16:57:21.606961    8376 docker.go:122] docker version: linux-26.0.2:Docker Engine - Community
	I0422 16:57:21.607071    8376 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0422 16:57:21.676099    8376 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-04-22 16:57:21.665845377 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1058-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215101440 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:26.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1]] Warnings:<nil>}}
	I0422 16:57:21.676215    8376 docker.go:295] overlay module found
	I0422 16:57:21.679561    8376 out.go:177] * Using the docker driver based on user configuration
	I0422 16:57:21.682125    8376 start.go:297] selected driver: docker
	I0422 16:57:21.682143    8376 start.go:901] validating driver "docker" against <nil>
	I0422 16:57:21.682157    8376 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0422 16:57:21.682856    8376 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0422 16:57:21.734036    8376 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-04-22 16:57:21.725629871 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1058-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215101440 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:26.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1]] Warnings:<nil>}}
	I0422 16:57:21.734230    8376 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0422 16:57:21.734486    8376 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0422 16:57:21.737803    8376 out.go:177] * Using Docker driver with root privileges
	I0422 16:57:21.740478    8376 cni.go:84] Creating CNI manager for ""
	I0422 16:57:21.740516    8376 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0422 16:57:21.740526    8376 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0422 16:57:21.740613    8376 start.go:340] cluster config:
	{Name:addons-613799 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-613799 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 16:57:21.745637    8376 out.go:177] * Starting "addons-613799" primary control-plane node in "addons-613799" cluster
	I0422 16:57:21.747908    8376 cache.go:121] Beginning downloading kic base image for docker with docker
	I0422 16:57:21.750676    8376 out.go:177] * Pulling base image v0.0.43-1713736339-18706 ...
	I0422 16:57:21.753091    8376 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0422 16:57:21.753096    8376 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e in local docker daemon
	I0422 16:57:21.753161    8376 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18706-2371/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0422 16:57:21.753173    8376 cache.go:56] Caching tarball of preloaded images
	I0422 16:57:21.753271    8376 preload.go:173] Found /home/jenkins/minikube-integration/18706-2371/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0422 16:57:21.753281    8376 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0422 16:57:21.753612    8376 profile.go:143] Saving config to /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/addons-613799/config.json ...
	I0422 16:57:21.753643    8376 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/addons-613799/config.json: {Name:mk422c538a71e64526c4e9f3e6e22584a89df7dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 16:57:21.765966    8376 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e to local cache
	I0422 16:57:21.766091    8376 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e in local cache directory
	I0422 16:57:21.766116    8376 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e in local cache directory, skipping pull
	I0422 16:57:21.766124    8376 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e exists in cache, skipping pull
	I0422 16:57:21.766132    8376 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e as a tarball
	I0422 16:57:21.766138    8376 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e from local cache
	I0422 16:57:38.300992    8376 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e from cached tarball
	I0422 16:57:38.301030    8376 cache.go:194] Successfully downloaded all kic artifacts
	I0422 16:57:38.301058    8376 start.go:360] acquireMachinesLock for addons-613799: {Name:mkf2537390c65ca2afb18d1d1f19220f1dbc8a37 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0422 16:57:38.301168    8376 start.go:364] duration metric: took 89.606µs to acquireMachinesLock for "addons-613799"
	I0422 16:57:38.301198    8376 start.go:93] Provisioning new machine with config: &{Name:addons-613799 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-613799 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0422 16:57:38.301287    8376 start.go:125] createHost starting for "" (driver="docker")
	I0422 16:57:38.303727    8376 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0422 16:57:38.303987    8376 start.go:159] libmachine.API.Create for "addons-613799" (driver="docker")
	I0422 16:57:38.304031    8376 client.go:168] LocalClient.Create starting
	I0422 16:57:38.304163    8376 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18706-2371/.minikube/certs/ca.pem
	I0422 16:57:38.906456    8376 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18706-2371/.minikube/certs/cert.pem
	I0422 16:57:39.191960    8376 cli_runner.go:164] Run: docker network inspect addons-613799 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0422 16:57:39.204934    8376 cli_runner.go:211] docker network inspect addons-613799 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0422 16:57:39.205033    8376 network_create.go:281] running [docker network inspect addons-613799] to gather additional debugging logs...
	I0422 16:57:39.205054    8376 cli_runner.go:164] Run: docker network inspect addons-613799
	W0422 16:57:39.219280    8376 cli_runner.go:211] docker network inspect addons-613799 returned with exit code 1
	I0422 16:57:39.219311    8376 network_create.go:284] error running [docker network inspect addons-613799]: docker network inspect addons-613799: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-613799 not found
	I0422 16:57:39.219329    8376 network_create.go:286] output of [docker network inspect addons-613799]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-613799 not found
	
	** /stderr **
	I0422 16:57:39.219431    8376 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0422 16:57:39.232933    8376 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400253bd70}
	I0422 16:57:39.232977    8376 network_create.go:124] attempt to create docker network addons-613799 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0422 16:57:39.233037    8376 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-613799 addons-613799
	I0422 16:57:39.292667    8376 network_create.go:108] docker network addons-613799 192.168.49.0/24 created
	I0422 16:57:39.292700    8376 kic.go:121] calculated static IP "192.168.49.2" for the "addons-613799" container
	I0422 16:57:39.292996    8376 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0422 16:57:39.306097    8376 cli_runner.go:164] Run: docker volume create addons-613799 --label name.minikube.sigs.k8s.io=addons-613799 --label created_by.minikube.sigs.k8s.io=true
	I0422 16:57:39.321524    8376 oci.go:103] Successfully created a docker volume addons-613799
	I0422 16:57:39.321616    8376 cli_runner.go:164] Run: docker run --rm --name addons-613799-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-613799 --entrypoint /usr/bin/test -v addons-613799:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e -d /var/lib
	I0422 16:57:41.768750    8376 cli_runner.go:217] Completed: docker run --rm --name addons-613799-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-613799 --entrypoint /usr/bin/test -v addons-613799:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e -d /var/lib: (2.447096737s)
	I0422 16:57:41.768816    8376 oci.go:107] Successfully prepared a docker volume addons-613799
	I0422 16:57:41.768854    8376 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0422 16:57:41.768880    8376 kic.go:194] Starting extracting preloaded images to volume ...
	I0422 16:57:41.768964    8376 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18706-2371/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-613799:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e -I lz4 -xf /preloaded.tar -C /extractDir
	I0422 16:57:45.741634    8376 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18706-2371/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-613799:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e -I lz4 -xf /preloaded.tar -C /extractDir: (3.972631541s)
	I0422 16:57:45.741670    8376 kic.go:203] duration metric: took 3.972786557s to extract preloaded images to volume ...
	W0422 16:57:45.741868    8376 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0422 16:57:45.741993    8376 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0422 16:57:45.797345    8376 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-613799 --name addons-613799 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-613799 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-613799 --network addons-613799 --ip 192.168.49.2 --volume addons-613799:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e
	I0422 16:57:46.178057    8376 cli_runner.go:164] Run: docker container inspect addons-613799 --format={{.State.Running}}
	I0422 16:57:46.202550    8376 cli_runner.go:164] Run: docker container inspect addons-613799 --format={{.State.Status}}
	I0422 16:57:46.222456    8376 cli_runner.go:164] Run: docker exec addons-613799 stat /var/lib/dpkg/alternatives/iptables
	I0422 16:57:46.275994    8376 oci.go:144] the created container "addons-613799" has a running status.
	I0422 16:57:46.276033    8376 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18706-2371/.minikube/machines/addons-613799/id_rsa...
	I0422 16:57:46.853983    8376 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18706-2371/.minikube/machines/addons-613799/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0422 16:57:46.871895    8376 cli_runner.go:164] Run: docker container inspect addons-613799 --format={{.State.Status}}
	I0422 16:57:46.893070    8376 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0422 16:57:46.893090    8376 kic_runner.go:114] Args: [docker exec --privileged addons-613799 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0422 16:57:46.957420    8376 cli_runner.go:164] Run: docker container inspect addons-613799 --format={{.State.Status}}
	I0422 16:57:46.982960    8376 machine.go:94] provisionDockerMachine start ...
	I0422 16:57:46.983046    8376 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-613799
	I0422 16:57:47.003520    8376 main.go:141] libmachine: Using SSH client type: native
	I0422 16:57:47.003809    8376 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e90] 0x3e46f0 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I0422 16:57:47.003818    8376 main.go:141] libmachine: About to run SSH command:
	hostname
	I0422 16:57:47.148219    8376 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-613799
	
	I0422 16:57:47.148285    8376 ubuntu.go:169] provisioning hostname "addons-613799"
	I0422 16:57:47.148388    8376 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-613799
	I0422 16:57:47.166806    8376 main.go:141] libmachine: Using SSH client type: native
	I0422 16:57:47.167036    8376 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e90] 0x3e46f0 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I0422 16:57:47.167047    8376 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-613799 && echo "addons-613799" | sudo tee /etc/hostname
	I0422 16:57:47.304480    8376 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-613799
	
	I0422 16:57:47.304585    8376 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-613799
	I0422 16:57:47.319988    8376 main.go:141] libmachine: Using SSH client type: native
	I0422 16:57:47.320236    8376 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e90] 0x3e46f0 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I0422 16:57:47.320259    8376 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-613799' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-613799/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-613799' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0422 16:57:47.440703    8376 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0422 16:57:47.440727    8376 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18706-2371/.minikube CaCertPath:/home/jenkins/minikube-integration/18706-2371/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18706-2371/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18706-2371/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18706-2371/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18706-2371/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18706-2371/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18706-2371/.minikube}
	I0422 16:57:47.440746    8376 ubuntu.go:177] setting up certificates
	I0422 16:57:47.440810    8376 provision.go:84] configureAuth start
	I0422 16:57:47.440891    8376 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-613799
	I0422 16:57:47.455861    8376 provision.go:143] copyHostCerts
	I0422 16:57:47.455947    8376 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-2371/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18706-2371/.minikube/ca.pem (1078 bytes)
	I0422 16:57:47.456069    8376 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-2371/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18706-2371/.minikube/cert.pem (1123 bytes)
	I0422 16:57:47.456135    8376 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-2371/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18706-2371/.minikube/key.pem (1675 bytes)
	I0422 16:57:47.456189    8376 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18706-2371/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18706-2371/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18706-2371/.minikube/certs/ca-key.pem org=jenkins.addons-613799 san=[127.0.0.1 192.168.49.2 addons-613799 localhost minikube]
	I0422 16:57:47.992242    8376 provision.go:177] copyRemoteCerts
	I0422 16:57:47.992312    8376 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0422 16:57:47.992364    8376 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-613799
	I0422 16:57:48.014704    8376 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18706-2371/.minikube/machines/addons-613799/id_rsa Username:docker}
	I0422 16:57:48.113993    8376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-2371/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0422 16:57:48.139904    8376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-2371/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0422 16:57:48.165118    8376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-2371/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0422 16:57:48.189550    8376 provision.go:87] duration metric: took 748.718484ms to configureAuth
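configureAuth does this in Go, but the server cert it produced (signed by the CA, with the SAN list logged above) is roughly equivalent to the following openssl sketch; filenames and flags are illustrative, not minikube's actual code path, and assume the key pair already exists:

	openssl req -new -key server-key.pem -subj "/O=jenkins.addons-613799" -out server.csr
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 365 \
	  -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.49.2,DNS:addons-613799,DNS:localhost,DNS:minikube') \
	  -out server.pem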
	I0422 16:57:48.189575    8376 ubuntu.go:193] setting minikube options for container-runtime
	I0422 16:57:48.189761    8376 config.go:182] Loaded profile config "addons-613799": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0422 16:57:48.189810    8376 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-613799
	I0422 16:57:48.205166    8376 main.go:141] libmachine: Using SSH client type: native
	I0422 16:57:48.205412    8376 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e90] 0x3e46f0 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I0422 16:57:48.205436    8376 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0422 16:57:48.329520    8376 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0422 16:57:48.329544    8376 ubuntu.go:71] root file system type: overlay
	I0422 16:57:48.329638    8376 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0422 16:57:48.329706    8376 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-613799
	I0422 16:57:48.348855    8376 main.go:141] libmachine: Using SSH client type: native
	I0422 16:57:48.349107    8376 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e90] 0x3e46f0 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I0422 16:57:48.349189    8376 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0422 16:57:48.484443    8376 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0422 16:57:48.484532    8376 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-613799
	I0422 16:57:48.501232    8376 main.go:141] libmachine: Using SSH client type: native
	I0422 16:57:48.501490    8376 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e90] 0x3e46f0 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I0422 16:57:48.501514    8376 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0422 16:57:49.256063    8376 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2024-04-18 16:26:05.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-04-22 16:57:48.479890089 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
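The empty ExecStart= line visible in the diff is the systemd idiom for clearing the inherited command before setting a new one, so exactly one ExecStart remains in effect. After the restart, the loaded unit and the effective command can be confirmed with:

	systemctl cat docker.service        # the unit systemd actually loaded
	systemctl show -p ExecStart docker  # a single effective ExecStart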
	
	I0422 16:57:49.256096    8376 machine.go:97] duration metric: took 2.273118111s to provisionDockerMachine
	I0422 16:57:49.256108    8376 client.go:171] duration metric: took 10.952067615s to LocalClient.Create
	I0422 16:57:49.256140    8376 start.go:167] duration metric: took 10.952151125s to libmachine.API.Create "addons-613799"
	I0422 16:57:49.256155    8376 start.go:293] postStartSetup for "addons-613799" (driver="docker")
	I0422 16:57:49.256166    8376 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0422 16:57:49.256242    8376 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0422 16:57:49.256288    8376 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-613799
	I0422 16:57:49.272169    8376 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18706-2371/.minikube/machines/addons-613799/id_rsa Username:docker}
	I0422 16:57:49.365948    8376 ssh_runner.go:195] Run: cat /etc/os-release
	I0422 16:57:49.369046    8376 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0422 16:57:49.369081    8376 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0422 16:57:49.369092    8376 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0422 16:57:49.369122    8376 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0422 16:57:49.369136    8376 filesync.go:126] Scanning /home/jenkins/minikube-integration/18706-2371/.minikube/addons for local assets ...
	I0422 16:57:49.369214    8376 filesync.go:126] Scanning /home/jenkins/minikube-integration/18706-2371/.minikube/files for local assets ...
	I0422 16:57:49.369244    8376 start.go:296] duration metric: took 113.083315ms for postStartSetup
	I0422 16:57:49.369556    8376 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-613799
	I0422 16:57:49.384138    8376 profile.go:143] Saving config to /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/addons-613799/config.json ...
	I0422 16:57:49.384425    8376 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0422 16:57:49.384475    8376 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-613799
	I0422 16:57:49.398382    8376 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18706-2371/.minikube/machines/addons-613799/id_rsa Username:docker}
	I0422 16:57:49.485403    8376 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0422 16:57:49.489816    8376 start.go:128] duration metric: took 11.188516004s to createHost
	I0422 16:57:49.489839    8376 start.go:83] releasing machines lock for "addons-613799", held for 11.188657875s
	I0422 16:57:49.489904    8376 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-613799
	I0422 16:57:49.504738    8376 ssh_runner.go:195] Run: cat /version.json
	I0422 16:57:49.504878    8376 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-613799
	I0422 16:57:49.505090    8376 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0422 16:57:49.505144    8376 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-613799
	I0422 16:57:49.526662    8376 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18706-2371/.minikube/machines/addons-613799/id_rsa Username:docker}
	I0422 16:57:49.530047    8376 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18706-2371/.minikube/machines/addons-613799/id_rsa Username:docker}
	I0422 16:57:49.725394    8376 ssh_runner.go:195] Run: systemctl --version
	I0422 16:57:49.729587    8376 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0422 16:57:49.733520    8376 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0422 16:57:49.758341    8376 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0422 16:57:49.758451    8376 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0422 16:57:49.787860    8376 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
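The loopback patch above only has to guarantee two fields, a "name" and cniVersion 1.0.0; assuming a minimal conf file (the filename here is illustrative), the patched result is equivalent to:

	$ cat /etc/cni/net.d/200-loopback.conf
	{ "cniVersion": "1.0.0", "name": "loopback", "type": "loopback" }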
	I0422 16:57:49.787921    8376 start.go:494] detecting cgroup driver to use...
	I0422 16:57:49.787968    8376 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0422 16:57:49.788094    8376 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0422 16:57:49.804043    8376 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0422 16:57:49.813504    8376 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0422 16:57:49.823384    8376 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0422 16:57:49.823553    8376 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0422 16:57:49.833284    8376 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0422 16:57:49.843034    8376 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0422 16:57:49.852319    8376 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0422 16:57:49.861996    8376 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0422 16:57:49.871270    8376 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0422 16:57:49.881396    8376 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0422 16:57:49.891156    8376 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0422 16:57:49.901084    8376 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0422 16:57:49.909690    8376 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0422 16:57:49.917713    8376 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 16:57:50.012539    8376 ssh_runner.go:195] Run: sudo systemctl restart containerd
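Each sed above rewrites one containerd setting in place; after the restart the net effect on /etc/containerd/config.toml can be spot-checked with:

	grep -nE 'SystemdCgroup|sandbox_image|conf_dir|enable_unprivileged_ports' /etc/containerd/config.toml
	# expected: SystemdCgroup = false, sandbox_image = "registry.k8s.io/pause:3.9",
	#           conf_dir = "/etc/cni/net.d", enable_unprivileged_ports = true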
	I0422 16:57:50.122282    8376 start.go:494] detecting cgroup driver to use...
	I0422 16:57:50.122329    8376 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0422 16:57:50.122378    8376 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0422 16:57:50.135617    8376 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0422 16:57:50.135687    8376 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0422 16:57:50.149649    8376 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0422 16:57:50.177076    8376 ssh_runner.go:195] Run: which cri-dockerd
	I0422 16:57:50.182905    8376 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0422 16:57:50.196906    8376 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0422 16:57:50.228598    8376 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0422 16:57:50.339076    8376 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0422 16:57:50.437819    8376 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0422 16:57:50.437963    8376 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0422 16:57:50.458475    8376 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 16:57:50.549090    8376 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0422 16:57:50.803219    8376 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0422 16:57:50.815172    8376 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0422 16:57:50.827108    8376 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0422 16:57:50.924809    8376 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0422 16:57:51.015386    8376 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 16:57:51.110829    8376 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0422 16:57:51.125982    8376 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0422 16:57:51.138447    8376 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 16:57:51.232249    8376 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0422 16:57:51.298520    8376 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0422 16:57:51.298651    8376 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0422 16:57:51.305869    8376 start.go:562] Will wait 60s for crictl version
	I0422 16:57:51.305973    8376 ssh_runner.go:195] Run: which crictl
	I0422 16:57:51.309677    8376 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0422 16:57:51.346588    8376 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
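The 60s waits are for the CRI socket and for crictl respectively; the same probe can be reproduced by pointing crictl at the endpoint configured in /etc/crictl.yaml above:

	sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version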
	I0422 16:57:51.346716    8376 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0422 16:57:51.365806    8376 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0422 16:57:51.390748    8376 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0422 16:57:51.390865    8376 cli_runner.go:164] Run: docker network inspect addons-613799 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0422 16:57:51.404879    8376 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0422 16:57:51.408325    8376 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
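The bash one-liner rewrites /etc/hosts through a temp file so the entry is replaced rather than appended twice; from inside the node the result can be checked with:

	getent hosts host.minikube.internal   # expected: 192.168.49.1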
	I0422 16:57:51.418987    8376 kubeadm.go:877] updating cluster {Name:addons-613799 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-613799 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0422 16:57:51.419101    8376 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0422 16:57:51.419162    8376 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0422 16:57:51.434581    8376 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0422 16:57:51.434601    8376 docker.go:615] Images already preloaded, skipping extraction
	I0422 16:57:51.434675    8376 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0422 16:57:51.449942    8376 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0422 16:57:51.449967    8376 cache_images.go:84] Images are preloaded, skipping loading
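The preload check is a plain set comparison between the tags the runtime reports and the tags baked into the preload tarball; a quick approximation of the same check (illustrative, not minikube's code) is:

	docker images --format '{{.Repository}}:{{.Tag}}' | sort | grep -E 'registry.k8s.io|k8s-minikube'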
	I0422 16:57:51.449979    8376 kubeadm.go:928] updating node { 192.168.49.2 8443 v1.30.0 docker true true} ...
	I0422 16:57:51.450072    8376 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-613799 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:addons-613799 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
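This [Service] fragment is installed as a systemd drop-in rather than a complete unit (the scp to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below); the merged result can be inspected on the node with:

	systemctl cat kubelet.service   # base unit plus the 10-kubeadm.conf drop-in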
	I0422 16:57:51.450144    8376 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0422 16:57:51.495896    8376 cni.go:84] Creating CNI manager for ""
	I0422 16:57:51.495926    8376 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0422 16:57:51.495945    8376 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0422 16:57:51.495965    8376 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-613799 NodeName:addons-613799 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0422 16:57:51.496113    8376 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-613799"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
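Before this file is handed to kubeadm init it is staged as /var/tmp/minikube/kubeadm.yaml.new; as a sanity check (not part of minikube's flow), kubeadm v1.30 can validate the rendered config offline:

	sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml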
	
	I0422 16:57:51.496189    8376 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0422 16:57:51.505139    8376 binaries.go:44] Found k8s binaries, skipping transfer
	I0422 16:57:51.505217    8376 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0422 16:57:51.514219    8376 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0422 16:57:51.532355    8376 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0422 16:57:51.550395    8376 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2155 bytes)
	I0422 16:57:51.567885    8376 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0422 16:57:51.571300    8376 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0422 16:57:51.581868    8376 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 16:57:51.679207    8376 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0422 16:57:51.694759    8376 certs.go:68] Setting up /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/addons-613799 for IP: 192.168.49.2
	I0422 16:57:51.694782    8376 certs.go:194] generating shared ca certs ...
	I0422 16:57:51.694797    8376 certs.go:226] acquiring lock for ca certs: {Name:mkc0c6170c42b1b43b7f622fcbfe2e475bd8761f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 16:57:51.694921    8376 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/18706-2371/.minikube/ca.key
	I0422 16:57:52.198323    8376 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18706-2371/.minikube/ca.crt ...
	I0422 16:57:52.198359    8376 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18706-2371/.minikube/ca.crt: {Name:mk43f7045a923a388c20855d9b7109ea9711bd71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 16:57:52.198579    8376 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18706-2371/.minikube/ca.key ...
	I0422 16:57:52.198593    8376 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18706-2371/.minikube/ca.key: {Name:mk05a7056c2fa1530454b312d27e793d019e6047 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 16:57:52.198689    8376 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18706-2371/.minikube/proxy-client-ca.key
	I0422 16:57:53.088601    8376 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18706-2371/.minikube/proxy-client-ca.crt ...
	I0422 16:57:53.088637    8376 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18706-2371/.minikube/proxy-client-ca.crt: {Name:mkf9966532cf1d4a8fc9f984aadfac54f19761e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 16:57:53.088832    8376 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18706-2371/.minikube/proxy-client-ca.key ...
	I0422 16:57:53.088855    8376 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18706-2371/.minikube/proxy-client-ca.key: {Name:mk9d8fa8fa433085f64af0e5f74d4550c5534357 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 16:57:53.088937    8376 certs.go:256] generating profile certs ...
	I0422 16:57:53.088995    8376 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/addons-613799/client.key
	I0422 16:57:53.089012    8376 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/addons-613799/client.crt with IP's: []
	I0422 16:57:53.321138    8376 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/addons-613799/client.crt ...
	I0422 16:57:53.321172    8376 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/addons-613799/client.crt: {Name:mk9c4679bcb9a8c762c32ced79e908c2202dd933 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 16:57:53.321355    8376 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/addons-613799/client.key ...
	I0422 16:57:53.321368    8376 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/addons-613799/client.key: {Name:mk6f7f6f308a5a0a37a4a19831325fa9c31281d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 16:57:53.321455    8376 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/addons-613799/apiserver.key.5c7d368a
	I0422 16:57:53.321478    8376 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/addons-613799/apiserver.crt.5c7d368a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0422 16:57:53.670544    8376 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/addons-613799/apiserver.crt.5c7d368a ...
	I0422 16:57:53.670575    8376 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/addons-613799/apiserver.crt.5c7d368a: {Name:mk98567b3ba7c525547550974738efa724adad58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 16:57:53.670790    8376 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/addons-613799/apiserver.key.5c7d368a ...
	I0422 16:57:53.670805    8376 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/addons-613799/apiserver.key.5c7d368a: {Name:mkc938aba1dbd32428b2059699ff36d807b2cd5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 16:57:53.670899    8376 certs.go:381] copying /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/addons-613799/apiserver.crt.5c7d368a -> /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/addons-613799/apiserver.crt
	I0422 16:57:53.670991    8376 certs.go:385] copying /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/addons-613799/apiserver.key.5c7d368a -> /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/addons-613799/apiserver.key
	I0422 16:57:53.671050    8376 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/addons-613799/proxy-client.key
	I0422 16:57:53.671070    8376 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/addons-613799/proxy-client.crt with IP's: []
	I0422 16:57:54.219187    8376 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/addons-613799/proxy-client.crt ...
	I0422 16:57:54.219217    8376 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/addons-613799/proxy-client.crt: {Name:mk13ecc99b57296351f6d4c3fb40793dbc3fcf22 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 16:57:54.219396    8376 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/addons-613799/proxy-client.key ...
	I0422 16:57:54.219408    8376 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/addons-613799/proxy-client.key: {Name:mk0d96e1595af43f4fa1f4e0f19f8574fbc8e363 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 16:57:54.219586    8376 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-2371/.minikube/certs/ca-key.pem (1675 bytes)
	I0422 16:57:54.219625    8376 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-2371/.minikube/certs/ca.pem (1078 bytes)
	I0422 16:57:54.219657    8376 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-2371/.minikube/certs/cert.pem (1123 bytes)
	I0422 16:57:54.219688    8376 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-2371/.minikube/certs/key.pem (1675 bytes)
	I0422 16:57:54.220304    8376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-2371/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0422 16:57:54.245474    8376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-2371/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0422 16:57:54.269475    8376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-2371/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0422 16:57:54.292096    8376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-2371/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0422 16:57:54.316050    8376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/addons-613799/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0422 16:57:54.339644    8376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/addons-613799/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0422 16:57:54.363357    8376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/addons-613799/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0422 16:57:54.389766    8376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/addons-613799/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0422 16:57:54.415031    8376 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-2371/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0422 16:57:54.439042    8376 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0422 16:57:54.457084    8376 ssh_runner.go:195] Run: openssl version
	I0422 16:57:54.462704    8376 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0422 16:57:54.471913    8376 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0422 16:57:54.475226    8376 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 22 16:57 /usr/share/ca-certificates/minikubeCA.pem
	I0422 16:57:54.475318    8376 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0422 16:57:54.482249    8376 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
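The b5213941.0 symlink follows OpenSSL's c_rehash convention: the filename is the certificate's subject hash plus an index, which is exactly what the -hash invocation above printed:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	ls -l /etc/ssl/certs/b5213941.0                                           # -> minikubeCA.pem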
	I0422 16:57:54.491781    8376 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0422 16:57:54.494954    8376 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0422 16:57:54.495002    8376 kubeadm.go:391] StartCluster: {Name:addons-613799 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-613799 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 16:57:54.495133    8376 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0422 16:57:54.509640    8376 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0422 16:57:54.518785    8376 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0422 16:57:54.527401    8376 kubeadm.go:213] ignoring SystemVerification for kubeadm because of docker driver
	I0422 16:57:54.527471    8376 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0422 16:57:54.536273    8376 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0422 16:57:54.536293    8376 kubeadm.go:156] found existing configuration files:
	
	I0422 16:57:54.536350    8376 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0422 16:57:54.545180    8376 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0422 16:57:54.545250    8376 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0422 16:57:54.553867    8376 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0422 16:57:54.562603    8376 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0422 16:57:54.562667    8376 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0422 16:57:54.570981    8376 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0422 16:57:54.579735    8376 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0422 16:57:54.579798    8376 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0422 16:57:54.588132    8376 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0422 16:57:54.596622    8376 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0422 16:57:54.596716    8376 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0422 16:57:54.605082    8376 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0422 16:57:54.708562    8376 kubeadm.go:309] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1058-aws\n", err: exit status 1
	I0422 16:57:54.780982    8376 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0422 16:58:10.480609    8376 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0422 16:58:10.480667    8376 kubeadm.go:309] [preflight] Running pre-flight checks
	I0422 16:58:10.480751    8376 kubeadm.go:309] [preflight] The system verification failed. Printing the output from the verification:
	I0422 16:58:10.480834    8376 kubeadm.go:309] KERNEL_VERSION: 5.15.0-1058-aws
	I0422 16:58:10.480873    8376 kubeadm.go:309] OS: Linux
	I0422 16:58:10.480922    8376 kubeadm.go:309] CGROUPS_CPU: enabled
	I0422 16:58:10.480972    8376 kubeadm.go:309] CGROUPS_CPUACCT: enabled
	I0422 16:58:10.481021    8376 kubeadm.go:309] CGROUPS_CPUSET: enabled
	I0422 16:58:10.481071    8376 kubeadm.go:309] CGROUPS_DEVICES: enabled
	I0422 16:58:10.481120    8376 kubeadm.go:309] CGROUPS_FREEZER: enabled
	I0422 16:58:10.481169    8376 kubeadm.go:309] CGROUPS_MEMORY: enabled
	I0422 16:58:10.481216    8376 kubeadm.go:309] CGROUPS_PIDS: enabled
	I0422 16:58:10.481265    8376 kubeadm.go:309] CGROUPS_HUGETLB: enabled
	I0422 16:58:10.481323    8376 kubeadm.go:309] CGROUPS_BLKIO: enabled
	I0422 16:58:10.481395    8376 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0422 16:58:10.481496    8376 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0422 16:58:10.481588    8376 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0422 16:58:10.481651    8376 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0422 16:58:10.484581    8376 out.go:204]   - Generating certificates and keys ...
	I0422 16:58:10.484684    8376 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0422 16:58:10.484801    8376 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0422 16:58:10.484886    8376 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0422 16:58:10.484945    8376 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0422 16:58:10.485017    8376 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0422 16:58:10.485076    8376 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0422 16:58:10.485135    8376 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0422 16:58:10.485269    8376 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-613799 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0422 16:58:10.485345    8376 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0422 16:58:10.485485    8376 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-613799 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0422 16:58:10.485564    8376 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0422 16:58:10.485647    8376 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0422 16:58:10.485711    8376 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0422 16:58:10.485789    8376 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0422 16:58:10.485858    8376 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0422 16:58:10.485923    8376 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0422 16:58:10.485986    8376 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0422 16:58:10.486080    8376 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0422 16:58:10.486151    8376 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0422 16:58:10.486243    8376 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0422 16:58:10.486316    8376 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0422 16:58:10.491138    8376 out.go:204]   - Booting up control plane ...
	I0422 16:58:10.491299    8376 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0422 16:58:10.491414    8376 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0422 16:58:10.491491    8376 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0422 16:58:10.491610    8376 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0422 16:58:10.491719    8376 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0422 16:58:10.491766    8376 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0422 16:58:10.491891    8376 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0422 16:58:10.491961    8376 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0422 16:58:10.492018    8376 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.001931274s
	I0422 16:58:10.492086    8376 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0422 16:58:10.492142    8376 kubeadm.go:309] [api-check] The API server is healthy after 6.001482481s
	I0422 16:58:10.492244    8376 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0422 16:58:10.492363    8376 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0422 16:58:10.492419    8376 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0422 16:58:10.492598    8376 kubeadm.go:309] [mark-control-plane] Marking the node addons-613799 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0422 16:58:10.492653    8376 kubeadm.go:309] [bootstrap-token] Using token: ujmn9q.jrcs4nm6itp9swei
	I0422 16:58:10.495497    8376 out.go:204]   - Configuring RBAC rules ...
	I0422 16:58:10.495632    8376 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0422 16:58:10.495728    8376 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0422 16:58:10.495877    8376 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0422 16:58:10.496019    8376 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0422 16:58:10.496141    8376 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0422 16:58:10.496233    8376 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0422 16:58:10.496358    8376 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0422 16:58:10.496415    8376 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0422 16:58:10.496472    8376 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0422 16:58:10.496482    8376 kubeadm.go:309] 
	I0422 16:58:10.496587    8376 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0422 16:58:10.496606    8376 kubeadm.go:309] 
	I0422 16:58:10.496686    8376 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0422 16:58:10.496692    8376 kubeadm.go:309] 
	I0422 16:58:10.496719    8376 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0422 16:58:10.496939    8376 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0422 16:58:10.497000    8376 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0422 16:58:10.497008    8376 kubeadm.go:309] 
	I0422 16:58:10.497064    8376 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0422 16:58:10.497071    8376 kubeadm.go:309] 
	I0422 16:58:10.497121    8376 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0422 16:58:10.497129    8376 kubeadm.go:309] 
	I0422 16:58:10.497183    8376 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0422 16:58:10.497263    8376 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0422 16:58:10.497337    8376 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0422 16:58:10.497345    8376 kubeadm.go:309] 
	I0422 16:58:10.497433    8376 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0422 16:58:10.497515    8376 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0422 16:58:10.497523    8376 kubeadm.go:309] 
	I0422 16:58:10.497611    8376 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token ujmn9q.jrcs4nm6itp9swei \
	I0422 16:58:10.497720    8376 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:cfbab1b2efd4ae189cfb7d393b2822919e7d26bba267c5e0f49c7df9703fd236 \
	I0422 16:58:10.497744    8376 kubeadm.go:309] 	--control-plane 
	I0422 16:58:10.497752    8376 kubeadm.go:309] 
	I0422 16:58:10.497840    8376 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0422 16:58:10.497848    8376 kubeadm.go:309] 
	I0422 16:58:10.497933    8376 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token ujmn9q.jrcs4nm6itp9swei \
	I0422 16:58:10.498054    8376 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:cfbab1b2efd4ae189cfb7d393b2822919e7d26bba267c5e0f49c7df9703fd236 
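
Every kubeadm line above carries the same kubeadm.go:309 call site because minikube streams the `kubeadm init` output and logs it line by line from one place. A minimal sketch of that pattern follows; the command and prefix are illustrative, not minikube's actual code:

    package main

    import (
        "bufio"
        "log"
        "os/exec"
    )

    func main() {
        // Stream a child process's stdout and log each line with a fixed
        // prefix, so every line appears to come from one call site.
        cmd := exec.Command("echo", "[preflight] example kubeadm output")
        stdout, err := cmd.StdoutPipe()
        if err != nil {
            log.Fatal(err)
        }
        if err := cmd.Start(); err != nil {
            log.Fatal(err)
        }
        sc := bufio.NewScanner(stdout)
        for sc.Scan() {
            log.Printf("kubeadm: %s", sc.Text())
        }
        if err := cmd.Wait(); err != nil {
            log.Fatal(err)
        }
    }
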
	I0422 16:58:10.498069    8376 cni.go:84] Creating CNI manager for ""
	I0422 16:58:10.498085    8376 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0422 16:58:10.502488    8376 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0422 16:58:10.505718    8376 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0422 16:58:10.514914    8376 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
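
The 496-byte 1-k8s.conflist written above is a bridge CNI config. The log does not show its contents; the sketch below only illustrates the general shape of a bridge conflist, and every field value is an assumption:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        // Illustrative bridge CNI conflist; NOT the exact file minikube ships.
        conf := map[string]any{
            "cniVersion": "0.3.1",
            "name":       "bridge",
            "plugins": []map[string]any{
                {
                    "type":      "bridge",
                    "bridge":    "bridge",
                    "isGateway": true,
                    "ipam": map[string]any{
                        "type":   "host-local",
                        "subnet": "10.244.0.0/16", // assumed pod CIDR for the sketch
                    },
                },
            },
        }
        out, err := json.MarshalIndent(conf, "", "  ")
        if err != nil {
            panic(err)
        }
        fmt.Println(string(out))
    }
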
	I0422 16:58:10.534747    8376 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0422 16:58:10.534834    8376 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 16:58:10.534929    8376 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-613799 minikube.k8s.io/updated_at=2024_04_22T16_58_10_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=066f6aefcc83a135104448c0f8191604ce1e099a minikube.k8s.io/name=addons-613799 minikube.k8s.io/primary=true
	I0422 16:58:10.696408    8376 ops.go:34] apiserver oom_adj: -16
	I0422 16:58:10.696431    8376 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 16:58:11.197085    8376 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 16:58:11.697409    8376 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 16:58:12.197428    8376 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 16:58:12.696884    8376 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 16:58:13.196876    8376 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 16:58:13.696560    8376 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 16:58:14.197427    8376 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 16:58:14.697018    8376 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 16:58:15.197264    8376 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 16:58:15.697085    8376 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 16:58:16.196701    8376 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 16:58:16.696532    8376 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 16:58:17.196607    8376 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 16:58:17.696563    8376 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 16:58:18.196569    8376 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 16:58:18.697056    8376 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 16:58:19.197092    8376 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 16:58:19.696577    8376 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 16:58:20.196853    8376 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 16:58:20.697166    8376 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 16:58:21.196542    8376 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 16:58:21.697145    8376 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 16:58:22.197079    8376 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 16:58:22.697492    8376 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0422 16:58:22.806678    8376 kubeadm.go:1107] duration metric: took 12.271924583s to wait for elevateKubeSystemPrivileges
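
The repeated `kubectl get sa default` runs above are a ~500ms poll loop: minikube waits for the default service account to exist (the step the summary line calls elevateKubeSystemPrivileges) before moving on. A sketch of the same poll-until-success pattern, with the command and deadline as assumptions:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        // Retry a command every 500ms until it succeeds or a deadline passes.
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            if err := exec.Command("kubectl", "get", "sa", "default").Run(); err == nil {
                fmt.Println("default service account is ready")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for default service account")
    }
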
	W0422 16:58:22.806715    8376 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0422 16:58:22.806724    8376 kubeadm.go:393] duration metric: took 28.311726747s to StartCluster
	I0422 16:58:22.806739    8376 settings.go:142] acquiring lock: {Name:mk4d4aae5dac6b45b6276ad1e8e6929d4ff7540f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 16:58:22.806867    8376 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18706-2371/kubeconfig
	I0422 16:58:22.807260    8376 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18706-2371/kubeconfig: {Name:mkd3bbb31387c9740f072dd59bcca857246cca69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 16:58:22.807455    8376 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0422 16:58:22.810112    8376 out.go:177] * Verifying Kubernetes components...
	I0422 16:58:22.807582    8376 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0422 16:58:22.807741    8376 config.go:182] Loaded profile config "addons-613799": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0422 16:58:22.807749    8376 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
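
The toEnable map above drives the per-addon setup that follows: each enabled entry produces a "Setting addon X=true" step. Since Go map iteration order is unspecified, the addons below are processed in no fixed order. A reduced sketch of that dispatch (only a few addon names shown):

    package main

    import "fmt"

    func main() {
        // Abbreviated version of the enable map logged above.
        toEnable := map[string]bool{
            "ingress":     true,
            "ingress-dns": true,
            "registry":    true,
            "ambassador":  false,
        }
        for name, enabled := range toEnable {
            if !enabled {
                continue
            }
            // Iteration order is randomized, matching the interleaved log.
            fmt.Printf("Setting addon %s=true in %q\n", name, "addons-613799")
        }
    }
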
	I0422 16:58:22.812182    8376 addons.go:69] Setting yakd=true in profile "addons-613799"
	I0422 16:58:22.812217    8376 addons.go:234] Setting addon yakd=true in "addons-613799"
	I0422 16:58:22.812249    8376 host.go:66] Checking if "addons-613799" exists ...
	I0422 16:58:22.812745    8376 cli_runner.go:164] Run: docker container inspect addons-613799 --format={{.State.Status}}
	I0422 16:58:22.812943    8376 addons.go:69] Setting ingress=true in profile "addons-613799"
	I0422 16:58:22.812968    8376 addons.go:234] Setting addon ingress=true in "addons-613799"
	I0422 16:58:22.813000    8376 host.go:66] Checking if "addons-613799" exists ...
	I0422 16:58:22.813389    8376 cli_runner.go:164] Run: docker container inspect addons-613799 --format={{.State.Status}}
	I0422 16:58:22.813689    8376 addons.go:69] Setting ingress-dns=true in profile "addons-613799"
	I0422 16:58:22.813712    8376 addons.go:234] Setting addon ingress-dns=true in "addons-613799"
	I0422 16:58:22.813736    8376 host.go:66] Checking if "addons-613799" exists ...
	I0422 16:58:22.814090    8376 cli_runner.go:164] Run: docker container inspect addons-613799 --format={{.State.Status}}
	I0422 16:58:22.814303    8376 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 16:58:22.814629    8376 addons.go:69] Setting cloud-spanner=true in profile "addons-613799"
	I0422 16:58:22.814651    8376 addons.go:234] Setting addon cloud-spanner=true in "addons-613799"
	I0422 16:58:22.814678    8376 host.go:66] Checking if "addons-613799" exists ...
	I0422 16:58:22.815020    8376 cli_runner.go:164] Run: docker container inspect addons-613799 --format={{.State.Status}}
	I0422 16:58:22.817319    8376 addons.go:69] Setting inspektor-gadget=true in profile "addons-613799"
	I0422 16:58:22.817350    8376 addons.go:234] Setting addon inspektor-gadget=true in "addons-613799"
	I0422 16:58:22.817382    8376 host.go:66] Checking if "addons-613799" exists ...
	I0422 16:58:22.817769    8376 cli_runner.go:164] Run: docker container inspect addons-613799 --format={{.State.Status}}
	I0422 16:58:22.817936    8376 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-613799"
	I0422 16:58:22.817976    8376 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-613799"
	I0422 16:58:22.818005    8376 host.go:66] Checking if "addons-613799" exists ...
	I0422 16:58:22.818357    8376 cli_runner.go:164] Run: docker container inspect addons-613799 --format={{.State.Status}}
	I0422 16:58:22.825029    8376 addons.go:69] Setting metrics-server=true in profile "addons-613799"
	I0422 16:58:22.825071    8376 addons.go:234] Setting addon metrics-server=true in "addons-613799"
	I0422 16:58:22.825112    8376 host.go:66] Checking if "addons-613799" exists ...
	I0422 16:58:22.825535    8376 cli_runner.go:164] Run: docker container inspect addons-613799 --format={{.State.Status}}
	I0422 16:58:22.838061    8376 addons.go:69] Setting default-storageclass=true in profile "addons-613799"
	I0422 16:58:22.838109    8376 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-613799"
	I0422 16:58:22.838411    8376 cli_runner.go:164] Run: docker container inspect addons-613799 --format={{.State.Status}}
	I0422 16:58:22.841388    8376 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-613799"
	I0422 16:58:22.841434    8376 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-613799"
	I0422 16:58:22.841481    8376 host.go:66] Checking if "addons-613799" exists ...
	I0422 16:58:22.841997    8376 cli_runner.go:164] Run: docker container inspect addons-613799 --format={{.State.Status}}
	I0422 16:58:22.856696    8376 addons.go:69] Setting gcp-auth=true in profile "addons-613799"
	I0422 16:58:22.856747    8376 mustload.go:65] Loading cluster: addons-613799
	I0422 16:58:22.856954    8376 addons.go:69] Setting registry=true in profile "addons-613799"
	I0422 16:58:22.856979    8376 addons.go:234] Setting addon registry=true in "addons-613799"
	I0422 16:58:22.857028    8376 host.go:66] Checking if "addons-613799" exists ...
	I0422 16:58:22.857436    8376 config.go:182] Loaded profile config "addons-613799": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0422 16:58:22.857744    8376 cli_runner.go:164] Run: docker container inspect addons-613799 --format={{.State.Status}}
	I0422 16:58:22.857839    8376 cli_runner.go:164] Run: docker container inspect addons-613799 --format={{.State.Status}}
	I0422 16:58:22.868664    8376 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.15
	I0422 16:58:22.871826    8376 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0422 16:58:22.871855    8376 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
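
The `scp memory --> <path>` lines here and below stream an in-memory asset to the node over SSH and write it in place, rather than copying a local file. A rough equivalent using the stock ssh(1) client; the port, key path, and remote path are taken from nearby log lines, and minikube itself uses a built-in SSH client rather than shelling out:

    package main

    import (
        "bytes"
        "os/exec"
    )

    func main() {
        // Pipe an in-memory payload to the node and write it with tee.
        payload := []byte("apiVersion: v1\n# ...asset bytes...\n")
        cmd := exec.Command("ssh", "-p", "32772",
            "-i", "/home/jenkins/minikube-integration/18706-2371/.minikube/machines/addons-613799/id_rsa",
            "docker@127.0.0.1",
            "sudo tee /etc/kubernetes/addons/deployment.yaml >/dev/null")
        cmd.Stdin = bytes.NewReader(payload)
        if err := cmd.Run(); err != nil {
            panic(err)
        }
    }
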
	I0422 16:58:22.871921    8376 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-613799
	I0422 16:58:22.927643    8376 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0422 16:58:22.931291    8376 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.0
	I0422 16:58:22.933443    8376 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0422 16:58:22.886564    8376 addons.go:69] Setting storage-provisioner=true in profile "addons-613799"
	I0422 16:58:22.933446    8376 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0422 16:58:22.886573    8376 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-613799"
	I0422 16:58:22.886578    8376 addons.go:69] Setting volumesnapshots=true in profile "addons-613799"
	I0422 16:58:22.940622    8376 addons.go:234] Setting addon storage-provisioner=true in "addons-613799"
	I0422 16:58:22.940721    8376 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0422 16:58:22.940731    8376 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.27.0
	I0422 16:58:22.940755    8376 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-613799"
	I0422 16:58:22.940830    8376 addons.go:234] Setting addon volumesnapshots=true in "addons-613799"
	I0422 16:58:22.942663    8376 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0422 16:58:22.942678    8376 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0422 16:58:22.942684    8376 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0422 16:58:22.942721    8376 host.go:66] Checking if "addons-613799" exists ...
	I0422 16:58:22.943002    8376 cli_runner.go:164] Run: docker container inspect addons-613799 --format={{.State.Status}}
	I0422 16:58:22.951985    8376 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0422 16:58:22.952021    8376 host.go:66] Checking if "addons-613799" exists ...
	I0422 16:58:22.954208    8376 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-613799
	I0422 16:58:22.954230    8376 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0422 16:58:22.954658    8376 cli_runner.go:164] Run: docker container inspect addons-613799 --format={{.State.Status}}
	I0422 16:58:22.960998    8376 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0422 16:58:22.961057    8376 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0422 16:58:22.961642    8376 cli_runner.go:164] Run: docker container inspect addons-613799 --format={{.State.Status}}
	I0422 16:58:22.963130    8376 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0422 16:58:22.963166    8376 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0422 16:58:22.973108    8376 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0422 16:58:22.973189    8376 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-613799
	I0422 16:58:22.973199    8376 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0422 16:58:22.991685    8376 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-613799
	I0422 16:58:23.009687    8376 addons.go:234] Setting addon default-storageclass=true in "addons-613799"
	I0422 16:58:23.009731    8376 host.go:66] Checking if "addons-613799" exists ...
	I0422 16:58:23.010134    8376 cli_runner.go:164] Run: docker container inspect addons-613799 --format={{.State.Status}}
	I0422 16:58:23.025027    8376 out.go:177]   - Using image docker.io/registry:2.8.3
	I0422 16:58:23.028667    8376 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0422 16:58:23.028685    8376 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0422 16:58:23.028753    8376 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-613799
	I0422 16:58:23.011682    8376 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0422 16:58:23.038210    8376 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-613799
	I0422 16:58:23.011693    8376 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0422 16:58:23.053212    8376 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-613799
	I0422 16:58:23.065071    8376 host.go:66] Checking if "addons-613799" exists ...
	I0422 16:58:23.070911    8376 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0422 16:58:23.072148    8376 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0422 16:58:23.083566    8376 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0422 16:58:23.088169    8376 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0422 16:58:23.091121    8376 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0422 16:58:23.024983    8376 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18706-2371/.minikube/machines/addons-613799/id_rsa Username:docker}
	I0422 16:58:23.100133    8376 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0422 16:58:23.102437    8376 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0422 16:58:23.104926    8376 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0422 16:58:23.104947    8376 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0422 16:58:23.105019    8376 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-613799
	I0422 16:58:23.147450    8376 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18706-2371/.minikube/machines/addons-613799/id_rsa Username:docker}
	I0422 16:58:23.100221    8376 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.15.0
	I0422 16:58:23.154249    8376 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0422 16:58:23.154276    8376 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0422 16:58:23.154358    8376 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-613799
	I0422 16:58:23.182830    8376 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0422 16:58:23.185121    8376 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0422 16:58:23.185148    8376 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0422 16:58:23.185241    8376 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-613799
	I0422 16:58:23.193102    8376 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18706-2371/.minikube/machines/addons-613799/id_rsa Username:docker}
	I0422 16:58:23.196250    8376 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-613799"
	I0422 16:58:23.196291    8376 host.go:66] Checking if "addons-613799" exists ...
	I0422 16:58:23.196686    8376 cli_runner.go:164] Run: docker container inspect addons-613799 --format={{.State.Status}}
	I0422 16:58:23.204831    8376 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0422 16:58:23.202812    8376 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18706-2371/.minikube/machines/addons-613799/id_rsa Username:docker}
	I0422 16:58:23.203606    8376 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0422 16:58:23.207080    8376 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0422 16:58:23.207159    8376 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-613799
	I0422 16:58:23.208460    8376 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0422 16:58:23.208477    8376 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0422 16:58:23.208533    8376 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-613799
	I0422 16:58:23.234294    8376 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18706-2371/.minikube/machines/addons-613799/id_rsa Username:docker}
	I0422 16:58:23.241432    8376 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18706-2371/.minikube/machines/addons-613799/id_rsa Username:docker}
	I0422 16:58:23.244562    8376 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18706-2371/.minikube/machines/addons-613799/id_rsa Username:docker}
	I0422 16:58:23.288951    8376 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18706-2371/.minikube/machines/addons-613799/id_rsa Username:docker}
	I0422 16:58:23.290399    8376 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18706-2371/.minikube/machines/addons-613799/id_rsa Username:docker}
	I0422 16:58:23.312099    8376 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18706-2371/.minikube/machines/addons-613799/id_rsa Username:docker}
	I0422 16:58:23.315570    8376 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0422 16:58:23.317834    8376 out.go:177]   - Using image docker.io/busybox:stable
	I0422 16:58:23.320886    8376 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0422 16:58:23.320905    8376 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0422 16:58:23.320974    8376 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-613799
	I0422 16:58:23.330399    8376 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18706-2371/.minikube/machines/addons-613799/id_rsa Username:docker}
	I0422 16:58:23.342622    8376 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18706-2371/.minikube/machines/addons-613799/id_rsa Username:docker}
	I0422 16:58:23.358119    8376 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18706-2371/.minikube/machines/addons-613799/id_rsa Username:docker}
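
Each `docker container inspect -f ...HostPort...` run above resolves the host port Docker mapped to the container's 22/tcp, which is why every subsequent ssh client dials 127.0.0.1:32772. The same lookup as a standalone sketch, using the template string from the log:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Ask Docker for the host port bound to the container's 22/tcp.
        out, err := exec.Command("docker", "container", "inspect", "-f",
            `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
            "addons-613799").Output()
        if err != nil {
            fmt.Println("inspect failed:", err)
            return
        }
        fmt.Println("ssh port:", strings.TrimSpace(string(out)))
    }
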
	I0422 16:58:23.448540    8376 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0422 16:58:23.448720    8376 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0422 16:58:23.709857    8376 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0422 16:58:23.709879    8376 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0422 16:58:23.736064    8376 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0422 16:58:23.736086    8376 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0422 16:58:23.832257    8376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0422 16:58:23.848421    8376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0422 16:58:23.859232    8376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0422 16:58:23.870507    8376 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0422 16:58:23.870528    8376 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0422 16:58:23.910151    8376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0422 16:58:23.920298    8376 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0422 16:58:23.920368    8376 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0422 16:58:23.956394    8376 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0422 16:58:23.956466    8376 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0422 16:58:23.957115    8376 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0422 16:58:23.957157    8376 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0422 16:58:23.965568    8376 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0422 16:58:23.965638    8376 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0422 16:58:24.134769    8376 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0422 16:58:24.134791    8376 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0422 16:58:24.179695    8376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0422 16:58:24.193554    8376 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0422 16:58:24.193627    8376 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0422 16:58:24.237461    8376 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0422 16:58:24.237530    8376 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0422 16:58:24.244294    8376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0422 16:58:24.248653    8376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0422 16:58:24.263811    8376 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0422 16:58:24.263872    8376 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0422 16:58:24.367666    8376 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0422 16:58:24.367751    8376 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0422 16:58:24.460887    8376 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0422 16:58:24.460961    8376 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0422 16:58:24.575996    8376 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0422 16:58:24.576068    8376 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0422 16:58:24.597239    8376 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0422 16:58:24.597311    8376 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0422 16:58:24.663555    8376 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0422 16:58:24.663627    8376 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0422 16:58:24.801324    8376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0422 16:58:24.890450    8376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0422 16:58:25.037300    8376 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0422 16:58:25.037379    8376 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0422 16:58:25.309558    8376 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0422 16:58:25.309620    8376 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0422 16:58:25.453684    8376 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0422 16:58:25.453755    8376 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0422 16:58:25.540790    8376 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0422 16:58:25.540862    8376 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0422 16:58:25.614704    8376 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0422 16:58:25.614776    8376 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0422 16:58:25.673620    8376 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0422 16:58:25.673690    8376 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0422 16:58:25.797072    8376 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0422 16:58:25.797141    8376 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0422 16:58:25.969579    8376 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0422 16:58:25.969651    8376 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0422 16:58:25.982149    8376 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0422 16:58:25.982220    8376 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0422 16:58:26.013575    8376 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0422 16:58:26.013650    8376 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0422 16:58:26.053706    8376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0422 16:58:26.280077    8376 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.831319656s)
	I0422 16:58:26.280230    8376 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.831423129s)
	I0422 16:58:26.280263    8376 start.go:946] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
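
The pipeline that just completed rewrites the CoreDNS Corefile: sed inserts a hosts{} block before the `forward . /etc/resolv.conf` directive (and a `log` directive before `errors`) so that host.minikube.internal resolves to the host gateway 192.168.49.1. A sketch of that edit on a minimal Corefile fragment:

    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        // Minimal Corefile fragment; the real one has more directives.
        corefile := "        errors\n        forward . /etc/resolv.conf\n"
        hostsBlock := "        hosts {\n" +
            "           192.168.49.1 host.minikube.internal\n" +
            "           fallthrough\n" +
            "        }\n"
        // Mirror the two sed insertions from the logged pipeline.
        patched := strings.Replace(corefile, "        forward .",
            hostsBlock+"        forward .", 1)
        patched = strings.Replace(patched, "        errors",
            "        log\n        errors", 1)
        fmt.Print(patched)
    }
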
	I0422 16:58:26.281892    8376 node_ready.go:35] waiting up to 6m0s for node "addons-613799" to be "Ready" ...
	I0422 16:58:26.286014    8376 node_ready.go:49] node "addons-613799" has status "Ready":"True"
	I0422 16:58:26.286078    8376 node_ready.go:38] duration metric: took 3.755206ms for node "addons-613799" to be "Ready" ...
	I0422 16:58:26.286115    8376 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0422 16:58:26.301690    8376 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-49264" in "kube-system" namespace to be "Ready" ...
	I0422 16:58:26.327328    8376 pod_ready.go:92] pod "coredns-7db6d8ff4d-49264" in "kube-system" namespace has status "Ready":"True"
	I0422 16:58:26.327400    8376 pod_ready.go:81] duration metric: took 25.67381ms for pod "coredns-7db6d8ff4d-49264" in "kube-system" namespace to be "Ready" ...
	I0422 16:58:26.327427    8376 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-6z22k" in "kube-system" namespace to be "Ready" ...
	I0422 16:58:26.338371    8376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0422 16:58:26.342170    8376 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0422 16:58:26.342243    8376 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0422 16:58:26.370898    8376 pod_ready.go:92] pod "coredns-7db6d8ff4d-6z22k" in "kube-system" namespace has status "Ready":"True"
	I0422 16:58:26.370926    8376 pod_ready.go:81] duration metric: took 43.479375ms for pod "coredns-7db6d8ff4d-6z22k" in "kube-system" namespace to be "Ready" ...
	I0422 16:58:26.370939    8376 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-613799" in "kube-system" namespace to be "Ready" ...
	I0422 16:58:26.401531    8376 pod_ready.go:92] pod "etcd-addons-613799" in "kube-system" namespace has status "Ready":"True"
	I0422 16:58:26.401556    8376 pod_ready.go:81] duration metric: took 30.609121ms for pod "etcd-addons-613799" in "kube-system" namespace to be "Ready" ...
	I0422 16:58:26.401568    8376 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-613799" in "kube-system" namespace to be "Ready" ...
	I0422 16:58:26.401789    8376 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0422 16:58:26.401806    8376 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0422 16:58:26.428031    8376 pod_ready.go:92] pod "kube-apiserver-addons-613799" in "kube-system" namespace has status "Ready":"True"
	I0422 16:58:26.428058    8376 pod_ready.go:81] duration metric: took 26.481805ms for pod "kube-apiserver-addons-613799" in "kube-system" namespace to be "Ready" ...
	I0422 16:58:26.428070    8376 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-613799" in "kube-system" namespace to be "Ready" ...
	I0422 16:58:26.685979    8376 pod_ready.go:92] pod "kube-controller-manager-addons-613799" in "kube-system" namespace has status "Ready":"True"
	I0422 16:58:26.686004    8376 pod_ready.go:81] duration metric: took 257.92509ms for pod "kube-controller-manager-addons-613799" in "kube-system" namespace to be "Ready" ...
	I0422 16:58:26.686016    8376 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4clz2" in "kube-system" namespace to be "Ready" ...
	I0422 16:58:26.697277    8376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0422 16:58:26.785664    8376 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-613799" context rescaled to 1 replica
	I0422 16:58:26.866410    8376 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0422 16:58:26.866436    8376 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0422 16:58:27.085915    8376 pod_ready.go:92] pod "kube-proxy-4clz2" in "kube-system" namespace has status "Ready":"True"
	I0422 16:58:27.085948    8376 pod_ready.go:81] duration metric: took 399.918852ms for pod "kube-proxy-4clz2" in "kube-system" namespace to be "Ready" ...
	I0422 16:58:27.085961    8376 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-613799" in "kube-system" namespace to be "Ready" ...
	I0422 16:58:27.212101    8376 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0422 16:58:27.212129    8376 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0422 16:58:27.480401    8376 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0422 16:58:27.480426    8376 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0422 16:58:27.485323    8376 pod_ready.go:92] pod "kube-scheduler-addons-613799" in "kube-system" namespace has status "Ready":"True"
	I0422 16:58:27.485348    8376 pod_ready.go:81] duration metric: took 399.378872ms for pod "kube-scheduler-addons-613799" in "kube-system" namespace to be "Ready" ...
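
The pod_ready waits above all reduce to the same check: fetch the pod and look for the Ready condition with status True. A client-go sketch of that check; the kubeconfig path and pod name are placeholders, and the program assumes the k8s.io/client-go module is available:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the pod carries condition Ready=True,
    // the check behind each pod_ready wait in the log.
    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        for {
            pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
                "kube-scheduler-addons-613799", metav1.GetOptions{})
            if err == nil && isPodReady(pod) {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(2 * time.Second)
        }
    }
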
	I0422 16:58:27.485361    8376 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-rrg6b" in "kube-system" namespace to be "Ready" ...
	I0422 16:58:27.842497    8376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0422 16:58:29.497439    8376 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-rrg6b" in "kube-system" namespace has status "Ready":"False"
	I0422 16:58:30.079964    8376 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0422 16:58:30.080072    8376 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-613799
	I0422 16:58:30.110912    8376 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18706-2371/.minikube/machines/addons-613799/id_rsa Username:docker}
	I0422 16:58:30.806714    8376 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0422 16:58:31.012739    8376 addons.go:234] Setting addon gcp-auth=true in "addons-613799"
	I0422 16:58:31.012829    8376 host.go:66] Checking if "addons-613799" exists ...
	I0422 16:58:31.013296    8376 cli_runner.go:164] Run: docker container inspect addons-613799 --format={{.State.Status}}
	I0422 16:58:31.032858    8376 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0422 16:58:31.032921    8376 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-613799
	I0422 16:58:31.060580    8376 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18706-2371/.minikube/machines/addons-613799/id_rsa Username:docker}
	I0422 16:58:31.541095    8376 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-rrg6b" in "kube-system" namespace has status "Ready":"False"
	I0422 16:58:32.190846    8376 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.358506299s)
	I0422 16:58:32.190888    8376 addons.go:470] Verifying addon ingress=true in "addons-613799"
	I0422 16:58:32.194385    8376 out.go:177] * Verifying ingress addon...
	I0422 16:58:32.191021    8376 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (8.342529469s)
	I0422 16:58:32.191045    8376 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (8.33174725s)
	I0422 16:58:32.191074    8376 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.280859253s)
	I0422 16:58:32.191124    8376 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.011410772s)
	I0422 16:58:32.191160    8376 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.942441041s)
	I0422 16:58:32.191177    8376 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.946818393s)
	I0422 16:58:32.191204    8376 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.389812326s)
	I0422 16:58:32.191255    8376 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.300743758s)
	I0422 16:58:32.191283    8376 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.137505922s)
	I0422 16:58:32.191357    8376 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.852917551s)
	I0422 16:58:32.191416    8376 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (5.49410898s)
	I0422 16:58:32.200565    8376 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0422 16:58:32.201525    8376 addons.go:470] Verifying addon registry=true in "addons-613799"
	I0422 16:58:32.209667    8376 out.go:177] * Verifying registry addon...
	I0422 16:58:32.210429    8376 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0422 16:58:32.213623    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:32.217976    8376 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-613799 service yakd-dashboard -n yakd-dashboard
	
	W0422 16:58:32.208716    8376 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0422 16:58:32.218070    8376 retry.go:31] will retry after 170.856831ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
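
The failure above is an ordering race, not a broken manifest: the VolumeSnapshot CRDs and a VolumeSnapshotClass object were applied in one batch, and the class could not be mapped before the CRDs were established, hence "ensure CRDs are installed first". minikube's answer, visible below, is to re-apply (adding --force) after a short jittered delay. A sketch of that retry-on-failure pattern, with the attempt count and interval as assumptions:

    package main

    import (
        "fmt"
        "math/rand"
        "os/exec"
        "time"
    )

    func main() {
        // Retry an apply that can fail transiently while CRDs settle.
        args := []string{"apply", "-f", "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml"}
        for attempt := 0; attempt < 5; attempt++ {
            if err := exec.Command("kubectl", args...).Run(); err == nil {
                fmt.Println("applied")
                return
            }
            wait := 100*time.Millisecond + time.Duration(rand.Intn(200))*time.Millisecond
            fmt.Printf("apply failed, will retry after %v\n", wait)
            time.Sleep(wait)
        }
        fmt.Println("giving up")
    }
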
	I0422 16:58:32.214022    8376 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0422 16:58:32.208412    8376 addons.go:470] Verifying addon metrics-server=true in "addons-613799"
	W0422 16:58:32.224150    8376 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0422 16:58:32.229497    8376 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0422 16:58:32.229575    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
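The kapi.go:96 lines that fill the remainder of this log are independent poll loops, one per addon label selector, each re-listing the matching pods roughly twice a second until all of them report Ready (the registry loop finally completes at 16:59:02 after 30.5s). A minimal sketch of that wait pattern with client-go, reusing the namespace and selector from the log:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod's Ready condition is True.
    func podReady(p *corev1.Pod) bool {
        for _, c := range p.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    // waitForLabel blocks until every pod matching selector in ns is Ready.
    func waitForLabel(ctx context.Context, cs kubernetes.Interface, ns, selector string) error {
        start := time.Now()
        err := wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
                if err != nil || len(pods.Items) == 0 {
                    return false, nil // transient errors and empty lists: keep waiting
                }
                for i := range pods.Items {
                    if !podReady(&pods.Items[i]) {
                        return false, nil
                    }
                }
                return true, nil
            })
        if err == nil {
            fmt.Printf("took %s to wait for %s\n", time.Since(start), selector)
        }
        return err
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        _ = waitForLabel(context.Background(), cs, "kube-system", "kubernetes.io/minikube-addons=registry")
    }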
	I0422 16:58:32.391943    8376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0422 16:58:32.705309    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:32.726268    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:33.206314    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:33.226292    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:33.706204    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:33.725636    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:33.995389    8376 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-rrg6b" in "kube-system" namespace has status "Ready":"False"
	I0422 16:58:34.205769    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:34.245866    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:34.517610    8376 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.48471733s)
	I0422 16:58:34.523767    8376 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0422 16:58:34.517889    8376 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.675322186s)
	I0422 16:58:34.520291    8376 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.128262362s)
	I0422 16:58:34.526741    8376 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0422 16:58:34.523913    8376 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-613799"
	I0422 16:58:34.529443    8376 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0422 16:58:34.529540    8376 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0422 16:58:34.531478    8376 out.go:177] * Verifying csi-hostpath-driver addon...
	I0422 16:58:34.534332    8376 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0422 16:58:34.541128    8376 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0422 16:58:34.541198    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:34.562235    8376 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0422 16:58:34.562312    8376 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0422 16:58:34.603377    8376 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0422 16:58:34.603401    8376 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
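The "scp memory --> ..." lines above are not file-to-file copies: the addon manifests are embedded in the minikube binary and streamed straight from memory over SSH into /etc/kubernetes/addons/ on the node, ahead of the kubectl apply that follows. A sketch of that streaming step, assuming golang.org/x/crypto/ssh; the manifest stub, key path, and docker user below are illustrative guesses modelled on minikube's machine layout, not values taken from this log:

    package main

    import (
        "bytes"
        "log"
        "os"
        "path/filepath"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        // Stand-in for manifest bytes that normally live inside the binary.
        manifest := []byte("apiVersion: v1\nkind: Namespace\nmetadata:\n  name: gcp-auth\n")

        home, _ := os.UserHomeDir()
        key, err := os.ReadFile(filepath.Join(home, ".minikube", "machines", "addons-613799", "id_rsa"))
        if err != nil {
            log.Fatal(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            log.Fatal(err)
        }
        client, err := ssh.Dial("tcp", "192.168.49.2:22", &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test node only
        })
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        session, err := client.NewSession()
        if err != nil {
            log.Fatal(err)
        }
        defer session.Close()
        // Pipe the in-memory manifest into the remote file, as "scp memory" suggests.
        session.Stdin = bytes.NewReader(manifest)
        if err := session.Run("sudo tee /etc/kubernetes/addons/gcp-auth-ns.yaml >/dev/null"); err != nil {
            log.Fatal(err)
        }
    }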
	I0422 16:58:34.637936    8376 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0422 16:58:34.704989    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:34.725522    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:35.042113    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:35.205588    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:35.225444    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:35.560722    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:35.684501    8376 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.046524509s)
	I0422 16:58:35.687757    8376 addons.go:470] Verifying addon gcp-auth=true in "addons-613799"
	I0422 16:58:35.690217    8376 out.go:177] * Verifying gcp-auth addon...
	I0422 16:58:35.693669    8376 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0422 16:58:35.701442    8376 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0422 16:58:35.701467    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:58:35.716106    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:35.730107    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:36.040975    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:36.197783    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:58:36.206075    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:36.226275    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:36.492149    8376 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-rrg6b" in "kube-system" namespace has status "Ready":"False"
	I0422 16:58:36.541245    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:36.697153    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:58:36.705249    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:36.725870    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:37.043179    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:37.197353    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:58:37.205782    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:37.225823    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:37.540697    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:37.697803    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:58:37.705420    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:37.726520    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:37.991703    8376 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-rrg6b" in "kube-system" namespace has status "Ready":"True"
	I0422 16:58:37.991731    8376 pod_ready.go:81] duration metric: took 10.506362076s for pod "nvidia-device-plugin-daemonset-rrg6b" in "kube-system" namespace to be "Ready" ...
	I0422 16:58:37.991741    8376 pod_ready.go:38] duration metric: took 11.705597614s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0422 16:58:37.991782    8376 api_server.go:52] waiting for apiserver process to appear ...
	I0422 16:58:37.991875    8376 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 16:58:38.009750    8376 api_server.go:72] duration metric: took 15.202258844s to wait for apiserver process to appear ...
	I0422 16:58:38.009780    8376 api_server.go:88] waiting for apiserver healthz status ...
	I0422 16:58:38.009823    8376 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0422 16:58:38.018042    8376 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0422 16:58:38.019136    8376 api_server.go:141] control plane version: v1.30.0
	I0422 16:58:38.019168    8376 api_server.go:131] duration metric: took 9.379468ms to wait for apiserver health ...
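The healthz sequence above is a three-step liveness ladder: pgrep confirms a kube-apiserver process exists, an HTTPS GET to /healthz confirms the endpoint answers "200 ok", and the version probe (reported as v1.30.0) confirms which control plane is answering. A stripped-down sketch of the middle step; the real client authenticates with the cluster CA and client certificates, whereas this one skips TLS verification for brevity:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        // InsecureSkipVerify here is a shortcut for the sketch only; minikube's
        // check trusts the cluster CA instead.
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://192.168.49.2:8443/healthz")
        if err != nil {
            fmt.Println("apiserver not reachable:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("https://192.168.49.2:8443/healthz returned %d: %s\n", resp.StatusCode, body)
    }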
	I0422 16:58:38.019179    8376 system_pods.go:43] waiting for kube-system pods to appear ...
	I0422 16:58:38.028936    8376 system_pods.go:59] 17 kube-system pods found
	I0422 16:58:38.028970    8376 system_pods.go:61] "coredns-7db6d8ff4d-49264" [70a99261-d58b-4d2a-a79d-2852a3c25d75] Running
	I0422 16:58:38.028981    8376 system_pods.go:61] "csi-hostpath-attacher-0" [779e6562-16eb-448d-9307-5ea7c6d2b1d8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0422 16:58:38.029019    8376 system_pods.go:61] "csi-hostpath-resizer-0" [74c0b93b-a762-4724-9dbd-67ea890f5660] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0422 16:58:38.029031    8376 system_pods.go:61] "csi-hostpathplugin-lff8w" [36bed39a-1ee8-4138-82e5-f5215af6c6cd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0422 16:58:38.029042    8376 system_pods.go:61] "etcd-addons-613799" [9512d363-37e6-450f-9c45-a47ee15878a3] Running
	I0422 16:58:38.029048    8376 system_pods.go:61] "kube-apiserver-addons-613799" [a3b85fb3-e230-4656-afa7-9582b95d2a8d] Running
	I0422 16:58:38.029052    8376 system_pods.go:61] "kube-controller-manager-addons-613799" [3078c5de-fb85-4e5e-99ae-beb4cdeb1e12] Running
	I0422 16:58:38.029066    8376 system_pods.go:61] "kube-ingress-dns-minikube" [e686b0b2-8006-4cd8-879a-c2828e33f5b0] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0422 16:58:38.029092    8376 system_pods.go:61] "kube-proxy-4clz2" [a1562ce4-2208-46ce-9182-f62c32c49503] Running
	I0422 16:58:38.029097    8376 system_pods.go:61] "kube-scheduler-addons-613799" [ee942baa-2445-4609-bdae-6352d5c4272e] Running
	I0422 16:58:38.029117    8376 system_pods.go:61] "metrics-server-c59844bb4-6pqqb" [8528911a-6f9f-4adb-9fb9-16526fdd739f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0422 16:58:38.029128    8376 system_pods.go:61] "nvidia-device-plugin-daemonset-rrg6b" [aaec657e-0a16-47e1-b7fa-19d45d1b473a] Running
	I0422 16:58:38.029135    8376 system_pods.go:61] "registry-7sjdx" [fe3a22e5-a967-42c5-a577-12e41a8d87a3] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0422 16:58:38.029146    8376 system_pods.go:61] "registry-proxy-vvsgz" [522abbcd-88fe-48d2-801e-abcd5103e5f8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0422 16:58:38.029154    8376 system_pods.go:61] "snapshot-controller-745499f584-shbmc" [8d23d52e-0843-4881-abe5-2b6522a4510e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0422 16:58:38.029166    8376 system_pods.go:61] "snapshot-controller-745499f584-vmqgr" [c8fd72cd-4a25-46e1-88c6-6447e7c5f0a6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0422 16:58:38.029171    8376 system_pods.go:61] "storage-provisioner" [7f2ed73a-722e-4be7-af2d-181a0408951c] Running
	I0422 16:58:38.029177    8376 system_pods.go:74] duration metric: took 9.973601ms to wait for pod list to return data ...
	I0422 16:58:38.029201    8376 default_sa.go:34] waiting for default service account to be created ...
	I0422 16:58:38.031712    8376 default_sa.go:45] found service account: "default"
	I0422 16:58:38.031740    8376 default_sa.go:55] duration metric: took 2.532011ms for default service account to be created ...
	I0422 16:58:38.031751    8376 system_pods.go:116] waiting for k8s-apps to be running ...
	I0422 16:58:38.042590    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:38.047195    8376 system_pods.go:86] 17 kube-system pods found
	I0422 16:58:38.047232    8376 system_pods.go:89] "coredns-7db6d8ff4d-49264" [70a99261-d58b-4d2a-a79d-2852a3c25d75] Running
	I0422 16:58:38.047243    8376 system_pods.go:89] "csi-hostpath-attacher-0" [779e6562-16eb-448d-9307-5ea7c6d2b1d8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0422 16:58:38.047273    8376 system_pods.go:89] "csi-hostpath-resizer-0" [74c0b93b-a762-4724-9dbd-67ea890f5660] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0422 16:58:38.047291    8376 system_pods.go:89] "csi-hostpathplugin-lff8w" [36bed39a-1ee8-4138-82e5-f5215af6c6cd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0422 16:58:38.047300    8376 system_pods.go:89] "etcd-addons-613799" [9512d363-37e6-450f-9c45-a47ee15878a3] Running
	I0422 16:58:38.047310    8376 system_pods.go:89] "kube-apiserver-addons-613799" [a3b85fb3-e230-4656-afa7-9582b95d2a8d] Running
	I0422 16:58:38.047315    8376 system_pods.go:89] "kube-controller-manager-addons-613799" [3078c5de-fb85-4e5e-99ae-beb4cdeb1e12] Running
	I0422 16:58:38.047329    8376 system_pods.go:89] "kube-ingress-dns-minikube" [e686b0b2-8006-4cd8-879a-c2828e33f5b0] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0422 16:58:38.047351    8376 system_pods.go:89] "kube-proxy-4clz2" [a1562ce4-2208-46ce-9182-f62c32c49503] Running
	I0422 16:58:38.047363    8376 system_pods.go:89] "kube-scheduler-addons-613799" [ee942baa-2445-4609-bdae-6352d5c4272e] Running
	I0422 16:58:38.047369    8376 system_pods.go:89] "metrics-server-c59844bb4-6pqqb" [8528911a-6f9f-4adb-9fb9-16526fdd739f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0422 16:58:38.047388    8376 system_pods.go:89] "nvidia-device-plugin-daemonset-rrg6b" [aaec657e-0a16-47e1-b7fa-19d45d1b473a] Running
	I0422 16:58:38.047395    8376 system_pods.go:89] "registry-7sjdx" [fe3a22e5-a967-42c5-a577-12e41a8d87a3] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0422 16:58:38.047408    8376 system_pods.go:89] "registry-proxy-vvsgz" [522abbcd-88fe-48d2-801e-abcd5103e5f8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0422 16:58:38.047415    8376 system_pods.go:89] "snapshot-controller-745499f584-shbmc" [8d23d52e-0843-4881-abe5-2b6522a4510e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0422 16:58:38.047427    8376 system_pods.go:89] "snapshot-controller-745499f584-vmqgr" [c8fd72cd-4a25-46e1-88c6-6447e7c5f0a6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0422 16:58:38.047432    8376 system_pods.go:89] "storage-provisioner" [7f2ed73a-722e-4be7-af2d-181a0408951c] Running
	I0422 16:58:38.047440    8376 system_pods.go:126] duration metric: took 15.683283ms to wait for k8s-apps to be running ...
	I0422 16:58:38.047458    8376 system_svc.go:44] waiting for kubelet service to be running ....
	I0422 16:58:38.047529    8376 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 16:58:38.063600    8376 system_svc.go:56] duration metric: took 16.130442ms WaitForService to wait for kubelet
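The kubelet check above leans entirely on systemctl's exit status: systemctl is-active --quiet prints nothing and exits 0 only when the unit is active, so no output parsing is needed. The log runs it via sudo over ssh_runner inside the node; the same check run locally, dropping sudo, might look like:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Exit status 0 means the unit is active; any non-zero status means it is not.
        if err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
            fmt.Println("kubelet is not running:", err)
            return
        }
        fmt.Println("kubelet is running")
    }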
	I0422 16:58:38.063630    8376 kubeadm.go:576] duration metric: took 15.256143539s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0422 16:58:38.063677    8376 node_conditions.go:102] verifying NodePressure condition ...
	I0422 16:58:38.066903    8376 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0422 16:58:38.066939    8376 node_conditions.go:123] node cpu capacity is 2
	I0422 16:58:38.066954    8376 node_conditions.go:105] duration metric: took 3.270067ms to run NodePressure ...
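The NodePressure verification reads each node's status: the capacity figures logged above (ephemeral-storage 203034800Ki, cpu 2) plus the MemoryPressure/DiskPressure/PIDPressure conditions, all of which should be False on a healthy node. A sketch of that read with client-go, assuming the same kubeconfig path used throughout this log:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            // Capacity values correspond to the "node cpu capacity" and
            // "node storage ephemeral capacity" lines in the log.
            fmt.Printf("node %s: cpu=%s ephemeral-storage=%s\n",
                n.Name, n.Status.Capacity.Cpu(), n.Status.Capacity.StorageEphemeral())
            for _, c := range n.Status.Conditions {
                switch c.Type {
                case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
                    if c.Status == corev1.ConditionTrue {
                        fmt.Printf("  pressure condition %s is True\n", c.Type)
                    }
                }
            }
        }
    }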
	I0422 16:58:38.066986    8376 start.go:240] waiting for startup goroutines ...
	I0422 16:58:38.198340    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:58:38.206011    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:38.226671    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:38.541241    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:38.698549    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:58:38.705435    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:38.725926    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:39.040512    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:39.197944    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:58:39.205123    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:39.226384    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:39.542085    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:39.697690    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:58:39.704829    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:39.725512    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:40.054226    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:40.197703    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:58:40.204801    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:40.225425    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:40.540227    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:40.697763    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:58:40.705543    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:40.726794    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:41.040883    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:41.197858    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:58:41.206688    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:41.226509    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:41.540431    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:41.698029    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:58:41.706044    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:41.728692    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:42.041098    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:42.198500    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:58:42.206391    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:42.226128    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:42.540475    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:42.696887    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:58:42.704883    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:42.725307    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:43.040026    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:43.198434    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:58:43.205641    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:43.224936    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:43.539562    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:43.698306    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:58:43.704691    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:43.725331    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:44.041247    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:44.197929    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:58:44.205290    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:44.226135    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:44.540924    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:44.697579    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:58:44.704535    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:44.724918    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:45.044807    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:45.202817    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:58:45.207720    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:45.227286    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:45.539550    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:45.696938    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:58:45.705398    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:45.726447    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:46.039902    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:46.197244    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:58:46.205037    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:46.225392    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:46.539842    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:46.697141    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:58:46.705330    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:46.725078    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:47.040602    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:47.196929    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:58:47.207463    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:47.225990    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:47.541566    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:47.698998    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:58:47.704990    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:47.736289    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:48.043185    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:48.197786    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:58:48.207552    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:48.225424    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:48.540511    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:48.697942    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:58:48.704826    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:48.725518    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:49.040297    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:49.199238    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:58:49.205841    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:49.226016    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:49.540994    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:49.697883    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:58:49.705166    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:49.726830    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:50.041245    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:50.197725    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:58:50.205589    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:50.243667    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:50.540583    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:50.697137    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:58:50.705138    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:50.725920    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:51.040708    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:51.197293    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:58:51.216510    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:51.225181    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:51.540204    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:51.697230    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:58:51.705100    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:51.725821    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:52.040144    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:52.197650    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:58:52.205537    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:52.225195    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:52.539513    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:52.697740    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:58:52.707264    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:52.725789    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:53.040712    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:53.197531    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:58:53.204389    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:53.227957    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:53.540282    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:53.704458    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:58:53.708689    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:53.725401    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:54.046106    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:54.201685    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:58:54.205583    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:54.229802    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:54.539712    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:54.697003    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:58:54.705693    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:54.726518    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:55.040473    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:55.198090    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:58:55.205736    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:55.225245    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:55.540910    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:55.697708    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:58:55.705231    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:55.726018    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:56.040043    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:56.197592    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:58:56.205506    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:56.228497    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:56.541176    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:56.698128    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:58:56.705926    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:56.726957    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:57.052636    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:57.199483    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:58:57.208959    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:57.225940    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:57.540125    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:57.698894    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:58:57.708147    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:57.725860    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:58.043458    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:58.200319    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:58:58.210886    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:58.228008    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:58.540676    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:58.699134    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:58:58.705784    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:58.728590    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:59.041049    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:59.197318    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:58:59.206407    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:59.226145    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:58:59.549763    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:58:59.698170    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:58:59.705649    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:58:59.725342    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:59:00.081040    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:00.198108    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:00.208470    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:00.236614    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:59:00.545404    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:00.697791    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:00.704915    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:00.725435    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:59:01.039662    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:01.197351    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:01.205411    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:01.226360    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:59:01.540935    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:01.697806    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:01.705721    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:01.725852    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:59:02.041109    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:02.198239    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:02.206707    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:02.225639    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0422 16:59:02.540907    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:02.697617    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:02.707211    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:02.727226    8376 kapi.go:107] duration metric: took 30.513202712s to wait for kubernetes.io/minikube-addons=registry ...
	I0422 16:59:03.041446    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:03.198075    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:03.205015    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:03.541110    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:03.697774    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:03.704606    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:04.040254    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:04.197559    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:04.204976    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:04.542284    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:04.697713    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:04.705966    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:05.041309    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:05.197405    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:05.204840    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:05.540202    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:05.697525    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:05.705666    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:06.041225    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:06.198130    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:06.205502    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:06.542855    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:06.700906    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:06.707266    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:07.042836    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:07.198303    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:07.205628    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:07.540976    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:07.697805    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:07.704992    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:08.041404    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:08.197969    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:08.205326    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:08.540446    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:08.697868    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:08.705675    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:09.041204    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:09.197270    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:09.205552    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:09.540626    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:09.697156    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:09.705525    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:10.041021    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:10.197741    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:10.204724    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:10.540518    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:10.698329    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:10.705797    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:11.043129    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:11.199976    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:11.212577    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:11.542073    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:11.697696    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:11.705401    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:12.040745    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:12.197282    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:12.205914    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:12.542355    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:12.698326    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:12.705781    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:13.041584    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:13.197256    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:13.205860    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:13.562168    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:13.708020    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:13.712829    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:14.049242    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:14.198107    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:14.205628    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:14.544899    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:14.697770    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:14.705013    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:15.044836    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:15.198503    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:15.206221    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:15.542282    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:15.698156    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:15.705582    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:16.041525    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:16.196973    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:16.205572    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:16.540222    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:16.705203    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:16.705549    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:17.040119    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:17.197245    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:17.204806    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:17.540297    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:17.697869    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:17.705334    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:18.040080    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:18.197779    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:18.205140    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:18.540386    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:18.699271    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:18.705672    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:19.040363    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:19.197960    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:19.205092    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:19.539742    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:19.697298    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:19.705573    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:20.043837    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:20.197503    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:20.204258    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:20.546231    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0422 16:59:20.697627    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:20.706891    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:21.040955    8376 kapi.go:107] duration metric: took 46.50662013s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0422 16:59:21.197365    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:21.205443    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:21.697429    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:21.705482    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:22.197222    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:22.205379    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:22.697001    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:22.707028    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:23.197062    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:23.205132    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:23.698049    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:23.705360    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:24.197169    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:24.205295    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:24.696976    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:24.705279    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:25.197084    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:25.205219    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:25.696986    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:25.704459    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:26.197060    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:26.205156    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:26.696999    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:26.705014    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:27.197758    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:27.205696    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:27.697055    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:27.704470    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:28.197407    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:28.205446    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:28.697893    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:28.704889    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:29.197885    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:29.207945    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:29.700296    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:29.712416    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:30.197446    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:30.205163    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:30.697829    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:30.705084    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:31.197933    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:31.204606    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:31.697856    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:31.704972    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:32.198005    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:32.204971    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:32.698066    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:32.705107    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:33.197020    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:33.205287    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:33.698472    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:33.705527    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:34.197083    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:34.204612    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:34.697191    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:34.705182    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:35.198251    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:35.205437    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:35.697872    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:35.705691    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:36.197116    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:36.204549    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:36.697286    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:36.705061    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:37.198027    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:37.205080    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:37.697881    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:37.705321    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:38.198252    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:38.205521    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:38.697492    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:38.711742    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:39.196746    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:39.204951    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:39.698397    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:39.706571    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:40.198347    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:40.205511    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:40.697731    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:40.705350    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:41.199482    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:41.206080    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:41.697687    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:41.705543    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:42.199047    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:42.211450    8376 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0422 16:59:42.697154    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:42.705099    8376 kapi.go:107] duration metric: took 1m10.504530581s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0422 16:59:43.198246    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:43.699048    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:44.198723    8376 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0422 16:59:44.696896    8376 kapi.go:107] duration metric: took 1m9.003225318s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0422 16:59:44.699253    8376 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-613799 cluster.
	I0422 16:59:44.701456    8376 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0422 16:59:44.704029    8376 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0422 16:59:44.706323    8376 out.go:177] * Enabled addons: inspektor-gadget, nvidia-device-plugin, cloud-spanner, ingress-dns, storage-provisioner, yakd, metrics-server, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0422 16:59:44.708297    8376 addons.go:505] duration metric: took 1m21.900538153s for enable addons: enabled=[inspektor-gadget nvidia-device-plugin cloud-spanner ingress-dns storage-provisioner yakd metrics-server storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0422 16:59:44.708342    8376 start.go:245] waiting for cluster config update ...
	I0422 16:59:44.708365    8376 start.go:254] writing updated cluster config ...
	I0422 16:59:44.708681    8376 ssh_runner.go:195] Run: rm -f paused
	I0422 16:59:45.084584    8376 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0422 16:59:45.086670    8376 out.go:177] * Done! kubectl is now configured to use "addons-613799" cluster and "default" namespace by default
	
	
	==> Docker <==
	Apr 22 17:00:43 addons-613799 cri-dockerd[1358]: time="2024-04-22T17:00:43Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d185628fd8043224fb7dccdec494dbcddc47c6f40612674c11c762acedbc4d02/resolv.conf as [nameserver 10.96.0.10 search headlamp.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Apr 22 17:00:43 addons-613799 dockerd[1146]: time="2024-04-22T17:00:43.327567303Z" level=warning msg="reference for unknown type: " digest="sha256:dd9e2ad6ae6d23761372bc9cc0dbcb47aacd6a31986827b43ac207cecb25c39f" remote="ghcr.io/headlamp-k8s/headlamp@sha256:dd9e2ad6ae6d23761372bc9cc0dbcb47aacd6a31986827b43ac207cecb25c39f" spanID=df6ba32e7187691f traceID=33be465e04513ef3f3a9140c1669da8d
	Apr 22 17:00:46 addons-613799 cri-dockerd[1358]: time="2024-04-22T17:00:46Z" level=info msg="Stop pulling image ghcr.io/headlamp-k8s/headlamp:v0.23.1@sha256:dd9e2ad6ae6d23761372bc9cc0dbcb47aacd6a31986827b43ac207cecb25c39f: Status: Downloaded newer image for ghcr.io/headlamp-k8s/headlamp@sha256:dd9e2ad6ae6d23761372bc9cc0dbcb47aacd6a31986827b43ac207cecb25c39f"
	Apr 22 17:00:49 addons-613799 cri-dockerd[1358]: time="2024-04-22T17:00:49Z" level=info msg="Stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget:v0.27.0@sha256:abef4926f3e6f0aa50c968aa954f990a6b0178e04a955293a49d96810c43d0e1: Status: Image is up to date for ghcr.io/inspektor-gadget/inspektor-gadget@sha256:abef4926f3e6f0aa50c968aa954f990a6b0178e04a955293a49d96810c43d0e1"
	Apr 22 17:00:50 addons-613799 cri-dockerd[1358]: time="2024-04-22T17:00:50Z" level=error msg="error getting RW layer size for container ID '311055170c67fc06afc958ea3e8765ad1cb3348e61651bb5b15f348d374144c9': Error response from daemon: No such container: 311055170c67fc06afc958ea3e8765ad1cb3348e61651bb5b15f348d374144c9"
	Apr 22 17:00:50 addons-613799 cri-dockerd[1358]: time="2024-04-22T17:00:50Z" level=error msg="Set backoffDuration to : 1m0s for container ID '311055170c67fc06afc958ea3e8765ad1cb3348e61651bb5b15f348d374144c9'"
	Apr 22 17:00:51 addons-613799 dockerd[1146]: time="2024-04-22T17:00:51.147266289Z" level=info msg="ignoring event" container=308b485f65d09392d524a22e2ecbf966798ca8f948a0f11d8135326aaf0a1ec1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 22 17:00:59 addons-613799 dockerd[1146]: time="2024-04-22T17:00:59.118653265Z" level=info msg="Container failed to exit within 30s of signal 15 - using the force" container=fa88f8496fd2174bc661f5ccad1b8007cbae7d9376515912e9110941d669973f spanID=3bdde3d26e3bfdba traceID=d3c83960ac660ae39cac551fcb3d6fd6
	Apr 22 17:00:59 addons-613799 dockerd[1146]: time="2024-04-22T17:00:59.147273060Z" level=info msg="ignoring event" container=fa88f8496fd2174bc661f5ccad1b8007cbae7d9376515912e9110941d669973f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 22 17:00:59 addons-613799 dockerd[1146]: time="2024-04-22T17:00:59.252002450Z" level=info msg="ignoring event" container=fb9ef4f6d26fe635be681472c52c7f5aa4086f8eaf2720bb75e55c3b1b4f09ef module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 22 17:00:59 addons-613799 dockerd[1146]: time="2024-04-22T17:00:59.370349371Z" level=info msg="ignoring event" container=548b3e6de2fe5f6f17672e0197f209b20d88aa9985dd204ed57853f34c3837eb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 22 17:00:59 addons-613799 dockerd[1146]: time="2024-04-22T17:00:59.485367229Z" level=info msg="ignoring event" container=1dd31d9063246c46fe183bb02130e07733d16047422fb22ab9902a883a8c7432 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 22 17:01:03 addons-613799 dockerd[1146]: time="2024-04-22T17:01:03.959924490Z" level=info msg="ignoring event" container=81cdbe00ef1c3c80a3444eb96d96bb4b17f121f914db45b923d3f257e06d6530 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 22 17:01:10 addons-613799 cri-dockerd[1358]: time="2024-04-22T17:01:10Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/3333d4639b31b2bfd4e061f4b1128349b277dcd27eaf5f14f93654374082fb92/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Apr 22 17:01:12 addons-613799 cri-dockerd[1358]: time="2024-04-22T17:01:12Z" level=info msg="Stop pulling image docker.io/nginx:alpine: Status: Downloaded newer image for nginx:alpine"
	Apr 22 17:01:19 addons-613799 cri-dockerd[1358]: time="2024-04-22T17:01:19Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/779991be09e022697632fd14308630e1be8e8813688e199db5606c022d395e2f/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Apr 22 17:01:21 addons-613799 cri-dockerd[1358]: time="2024-04-22T17:01:21Z" level=info msg="Stop pulling image gcr.io/google-samples/hello-app:1.0: Status: Downloaded newer image for gcr.io/google-samples/hello-app:1.0"
	Apr 22 17:01:22 addons-613799 dockerd[1146]: time="2024-04-22T17:01:22.106316406Z" level=info msg="ignoring event" container=2c55cf68e01047eabd7f04a3fe1316124f2424d577e16cecdb5c81743b85b875 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 22 17:01:22 addons-613799 dockerd[1146]: time="2024-04-22T17:01:22.400738201Z" level=info msg="ignoring event" container=a335d08830aaba96efaba766f36cff1639368b35ed71cbb7d977aa403aca88ad module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 22 17:01:35 addons-613799 dockerd[1146]: time="2024-04-22T17:01:35.252888268Z" level=info msg="ignoring event" container=cc6b34661cec8a7ae4f2c813c4f8d329d845eabcf7fd29d06df148e6aea78e56 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 22 17:01:35 addons-613799 dockerd[1146]: time="2024-04-22T17:01:35.966627345Z" level=info msg="ignoring event" container=adf69b7e977941f3005d1c2395811c76ec3355b1d1388b8ae46bc6922a861eef module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 22 17:01:39 addons-613799 dockerd[1146]: time="2024-04-22T17:01:39.425203682Z" level=info msg="Container failed to exit within 2s of signal 15 - using the force" container=2ffe8142931d3786392344cde596437be1612d9f93b30f4ffcbcfea4fbe39ac4 spanID=07862bca8f561be1 traceID=71bec72ad1c66a4c0c740830e749225e
	Apr 22 17:01:39 addons-613799 dockerd[1146]: time="2024-04-22T17:01:39.484738028Z" level=info msg="ignoring event" container=2ffe8142931d3786392344cde596437be1612d9f93b30f4ffcbcfea4fbe39ac4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 22 17:01:39 addons-613799 cri-dockerd[1358]: time="2024-04-22T17:01:39Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"ingress-nginx-controller-84df5799c-l9gw4_ingress-nginx\": unexpected command output nsenter: cannot open /proc/7980/ns/net: No such file or directory\n with error: exit status 1"
	Apr 22 17:01:39 addons-613799 dockerd[1146]: time="2024-04-22T17:01:39.615517103Z" level=info msg="ignoring event" container=a57851946116a348bd030bfdba414e9aaa97d1a9b73add72af0e9a9d3ebdc94b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	adf69b7e97794       dd1b12fcb6097                                                                                                                9 seconds ago       Exited              hello-world-app           2                   779991be09e02       hello-world-app-86c47465fc-g26z8
	cb806f1d6d2d7       nginx@sha256:7bd88800d8c18d4f73feeee25e04fcdbeecfc5e0a2b7254a90f4816bb67beadd                                                32 seconds ago      Running             nginx                     0                   3333d4639b31b       nginx
	76302fb56d844       ghcr.io/headlamp-k8s/headlamp@sha256:dd9e2ad6ae6d23761372bc9cc0dbcb47aacd6a31986827b43ac207cecb25c39f                        58 seconds ago      Running             headlamp                  0                   d185628fd8043       headlamp-7559bf459f-8zwkj
	0f75e92f9d7a2       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                 2 minutes ago       Running             gcp-auth                  0                   63cde745aad97       gcp-auth-5db96cd9b4-8ksdd
	2fe6b0927e1e7       1a024e390dd05                                                                                                                2 minutes ago       Exited              patch                     1                   ef9040460e5f5       ingress-nginx-admission-patch-jv6r7
	9e5e457040168       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:44d1d0e9f19c63f58b380c5fddaca7cf22c7cee564adeff365225a5df5ef3334   2 minutes ago       Exited              create                    0                   aeb350b252a49       ingress-nginx-admission-create-wfkw6
	3e2b323e1c26e       marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                                        2 minutes ago       Running             yakd                      0                   d758fa9b060e6       yakd-dashboard-5ddbf7d777-sc655
	4d6e2db150b34       ba04bb24b9575                                                                                                                3 minutes ago       Running             storage-provisioner       0                   967c5daf035d0       storage-provisioner
	f7771d3faa790       2437cf7621777                                                                                                                3 minutes ago       Running             coredns                   0                   a5c869504248f       coredns-7db6d8ff4d-49264
	42370e1e04575       cb7eac0b42cc1                                                                                                                3 minutes ago       Running             kube-proxy                0                   7d81658da029c       kube-proxy-4clz2
	81d92ceb51c5d       68feac521c0f1                                                                                                                3 minutes ago       Running             kube-controller-manager   0                   cff4da1e08c9d       kube-controller-manager-addons-613799
	c21fa955fe51b       181f57fd3cdb7                                                                                                                3 minutes ago       Running             kube-apiserver            0                   057e7d7ea351e       kube-apiserver-addons-613799
	fc150dafead7a       547adae34140b                                                                                                                3 minutes ago       Running             kube-scheduler            0                   9183d41f6dd31       kube-scheduler-addons-613799
	31e3fbb27550a       014faa467e297                                                                                                                3 minutes ago       Running             etcd                      0                   3db902b7a80a9       etcd-addons-613799
	
	
	==> coredns [f7771d3faa79] <==
	[INFO] 10.244.0.20:55700 - 36994 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000186711s
	[INFO] 10.244.0.20:55700 - 64869 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.00340613s
	[INFO] 10.244.0.20:58754 - 63158 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.00474075s
	[INFO] 10.244.0.20:58754 - 26090 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002392758s
	[INFO] 10.244.0.20:55700 - 18933 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002926291s
	[INFO] 10.244.0.20:58754 - 59797 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000148148s
	[INFO] 10.244.0.20:55700 - 58087 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000071211s
	[INFO] 10.244.0.20:56452 - 64906 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000137489s
	[INFO] 10.244.0.20:44821 - 768 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000059666s
	[INFO] 10.244.0.20:44821 - 24698 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000076995s
	[INFO] 10.244.0.20:56452 - 47714 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000056294s
	[INFO] 10.244.0.20:44821 - 1395 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000057746s
	[INFO] 10.244.0.20:56452 - 63008 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00004964s
	[INFO] 10.244.0.20:44821 - 21858 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000058657s
	[INFO] 10.244.0.20:56452 - 13417 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000045373s
	[INFO] 10.244.0.20:44821 - 8529 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000048319s
	[INFO] 10.244.0.20:56452 - 65095 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000139573s
	[INFO] 10.244.0.20:44821 - 38209 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000111759s
	[INFO] 10.244.0.20:56452 - 26378 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000044306s
	[INFO] 10.244.0.20:56452 - 16659 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001564718s
	[INFO] 10.244.0.20:44821 - 9718 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001579011s
	[INFO] 10.244.0.20:44821 - 640 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001194496s
	[INFO] 10.244.0.20:56452 - 47020 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001789541s
	[INFO] 10.244.0.20:44821 - 61148 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000045225s
	[INFO] 10.244.0.20:56452 - 28951 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000046604s
	
	
	==> describe nodes <==
	Name:               addons-613799
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-613799
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=066f6aefcc83a135104448c0f8191604ce1e099a
	                    minikube.k8s.io/name=addons-613799
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_22T16_58_10_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-613799
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Apr 2024 16:58:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-613799
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Apr 2024 17:01:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Apr 2024 17:01:14 +0000   Mon, 22 Apr 2024 16:58:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Apr 2024 17:01:14 +0000   Mon, 22 Apr 2024 16:58:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Apr 2024 17:01:14 +0000   Mon, 22 Apr 2024 16:58:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Apr 2024 17:01:14 +0000   Mon, 22 Apr 2024 16:58:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-613799
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022560Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022560Ki
	  pods:               110
	System Info:
	  Machine ID:                 e8569915c07e46de9508ef28de500de0
	  System UUID:                76e86a3d-e1fd-49d8-8e06-4b37d1d78fdf
	  Boot ID:                    10a06b61-013b-4e8e-82bb-900d7f84a0de
	  Kernel Version:             5.15.0-1058-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-86c47465fc-g26z8         0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         35s
	  gcp-auth                    gcp-auth-5db96cd9b4-8ksdd                0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m9s
	  headlamp                    headlamp-7559bf459f-8zwkj                0 (0%)        0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 coredns-7db6d8ff4d-49264                 100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     3m21s
	  kube-system                 etcd-addons-613799                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         3m34s
	  kube-system                 kube-apiserver-addons-613799             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m34s
	  kube-system                 kube-controller-manager-addons-613799    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m34s
	  kube-system                 kube-proxy-4clz2                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m21s
	  kube-system                 kube-scheduler-addons-613799             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m34s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m17s
	  yakd-dashboard              yakd-dashboard-5ddbf7d777-sc655          0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     3m14s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             298Mi (3%)  426Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m19s                  kube-proxy       
	  Normal  Starting                 3m42s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m42s (x8 over 3m42s)  kubelet          Node addons-613799 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m42s (x8 over 3m42s)  kubelet          Node addons-613799 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m42s (x7 over 3m42s)  kubelet          Node addons-613799 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m42s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 3m35s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m35s                  kubelet          Node addons-613799 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m35s                  kubelet          Node addons-613799 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m35s                  kubelet          Node addons-613799 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             3m35s                  kubelet          Node addons-613799 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  3m35s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m25s                  kubelet          Node addons-613799 status is now: NodeReady
	  Normal  RegisteredNode           3m22s                  node-controller  Node addons-613799 event: Registered Node addons-613799 in Controller
	
	
	==> dmesg <==
	[Apr22 16:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015433] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.522540] systemd[1]: /lib/systemd/system/cloud-init-local.service:15: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.002942] systemd[1]: /lib/systemd/system/cloud-init.service:19: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.018692] systemd[1]: /lib/systemd/system/cloud-init.target:15: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.004685] systemd[1]: /lib/systemd/system/cloud-final.service:9: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.003974] systemd[1]: /lib/systemd/system/cloud-config.service:8: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.663946] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.869211] kauditd_printk_skb: 27 callbacks suppressed
	
	
	==> etcd [31e3fbb27550] <==
	{"level":"info","ts":"2024-04-22T16:58:03.687411Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-22T16:58:03.687419Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-22T16:58:03.690873Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-22T16:58:03.691079Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-22T16:58:03.691108Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-22T16:58:03.691189Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-04-22T16:58:03.691199Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-04-22T16:58:04.676815Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-04-22T16:58:04.676915Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-04-22T16:58:04.67696Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-04-22T16:58:04.677015Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-04-22T16:58:04.677048Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-04-22T16:58:04.677092Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-04-22T16:58:04.677131Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-04-22T16:58:04.684403Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-22T16:58:04.684741Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-613799 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-22T16:58:04.687663Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-22T16:58:04.687786Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-22T16:58:04.687853Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-22T16:58:04.687893Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-22T16:58:04.688208Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-22T16:58:04.689916Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-22T16:58:04.691431Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-04-22T16:58:04.694555Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-22T16:58:04.725294Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> gcp-auth [0f75e92f9d7a] <==
	2024/04/22 16:59:44 GCP Auth Webhook started!
	2024/04/22 16:59:56 Ready to marshal response ...
	2024/04/22 16:59:56 Ready to write response ...
	2024/04/22 16:59:57 Ready to marshal response ...
	2024/04/22 16:59:57 Ready to write response ...
	2024/04/22 17:00:17 Ready to marshal response ...
	2024/04/22 17:00:18 Ready to write response ...
	2024/04/22 17:00:19 Ready to marshal response ...
	2024/04/22 17:00:19 Ready to write response ...
	2024/04/22 17:00:19 Ready to marshal response ...
	2024/04/22 17:00:19 Ready to write response ...
	2024/04/22 17:00:28 Ready to marshal response ...
	2024/04/22 17:00:28 Ready to write response ...
	2024/04/22 17:00:42 Ready to marshal response ...
	2024/04/22 17:00:42 Ready to write response ...
	2024/04/22 17:00:42 Ready to marshal response ...
	2024/04/22 17:00:42 Ready to write response ...
	2024/04/22 17:00:42 Ready to marshal response ...
	2024/04/22 17:00:42 Ready to write response ...
	2024/04/22 17:01:09 Ready to marshal response ...
	2024/04/22 17:01:09 Ready to write response ...
	2024/04/22 17:01:19 Ready to marshal response ...
	2024/04/22 17:01:19 Ready to write response ...
	
	
	==> kernel <==
	 17:01:44 up 44 min,  0 users,  load average: 3.36, 2.65, 1.16
	Linux addons-613799 5.15.0-1058-aws #64~20.04.1-Ubuntu SMP Tue Apr 9 11:11:55 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kube-apiserver [c21fa955fe51] <==
	I0422 16:58:59.586428       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0422 17:00:09.836587       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0422 17:00:34.835983       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0422 17:00:34.836241       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0422 17:00:34.862253       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0422 17:00:34.862309       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0422 17:00:34.882736       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0422 17:00:34.882788       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0422 17:00:34.900602       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0422 17:00:34.901256       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0422 17:00:34.912171       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0422 17:00:34.912212       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0422 17:00:35.883066       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0422 17:00:35.912175       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0422 17:00:35.965681       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0422 17:00:42.602518       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.104.208.93"}
	E0422 17:00:44.124530       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0422 17:01:00.510315       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0422 17:01:03.870040       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0422 17:01:04.907810       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0422 17:01:09.494798       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0422 17:01:09.819513       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.101.205.238"}
	I0422 17:01:19.482786       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.107.190.145"}
	E0422 17:01:36.461814       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	E0422 17:01:36.487154       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	
	
	==> kube-controller-manager [81d92ceb51c5] <==
	W0422 17:01:13.500811       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0422 17:01:13.500853       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0422 17:01:14.040139       1 namespace_controller.go:182] "Namespace has been deleted" logger="namespace-controller" namespace="gadget"
	I0422 17:01:16.584350       1 namespace_controller.go:182] "Namespace has been deleted" logger="namespace-controller" namespace="local-path-storage"
	W0422 17:01:17.053290       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0422 17:01:17.053328       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0422 17:01:17.867820       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0422 17:01:17.867854       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0422 17:01:19.296932       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="35.834049ms"
	I0422 17:01:19.309279       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="12.282296ms"
	I0422 17:01:19.310621       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="35.068µs"
	I0422 17:01:19.315919       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="37.611µs"
	I0422 17:01:22.287734       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="41.46µs"
	I0422 17:01:22.909427       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0422 17:01:22.909468       1 shared_informer.go:320] Caches are synced for resource quota
	W0422 17:01:23.304461       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0422 17:01:23.304504       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0422 17:01:23.308631       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="42.509µs"
	I0422 17:01:23.354439       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0422 17:01:23.354492       1 shared_informer.go:320] Caches are synced for garbage collector
	I0422 17:01:24.326038       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="44.88µs"
	I0422 17:01:36.375138       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create"
	I0422 17:01:36.382297       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-84df5799c" duration="99.115µs"
	I0422 17:01:36.384560       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch"
	I0422 17:01:36.485934       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="56.13µs"
	
	
	==> kube-proxy [42370e1e0457] <==
	I0422 16:58:24.501692       1 server_linux.go:69] "Using iptables proxy"
	I0422 16:58:24.534865       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	I0422 16:58:24.605983       1 server.go:659] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0422 16:58:24.606028       1 server_linux.go:165] "Using iptables Proxier"
	I0422 16:58:24.611541       1 server_linux.go:511] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0422 16:58:24.611573       1 server_linux.go:528] "Defaulting to no-op detect-local"
	I0422 16:58:24.611589       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0422 16:58:24.611791       1 server.go:872] "Version info" version="v1.30.0"
	I0422 16:58:24.611805       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0422 16:58:24.617863       1 config.go:192] "Starting service config controller"
	I0422 16:58:24.617877       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0422 16:58:24.617902       1 config.go:101] "Starting endpoint slice config controller"
	I0422 16:58:24.617907       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0422 16:58:24.618237       1 config.go:319] "Starting node config controller"
	I0422 16:58:24.618244       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0422 16:58:24.719297       1 shared_informer.go:320] Caches are synced for node config
	I0422 16:58:24.719339       1 shared_informer.go:320] Caches are synced for service config
	I0422 16:58:24.719380       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [fc150dafead7] <==
	W0422 16:58:08.292744       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0422 16:58:08.292855       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0422 16:58:08.363700       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0422 16:58:08.363971       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0422 16:58:08.371010       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0422 16:58:08.371050       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0422 16:58:08.382843       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0422 16:58:08.383093       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0422 16:58:08.393026       1 reflector.go:547] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0422 16:58:08.393265       1 reflector.go:150] runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0422 16:58:08.443650       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0422 16:58:08.443870       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0422 16:58:08.473296       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0422 16:58:08.473543       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0422 16:58:08.496835       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0422 16:58:08.497093       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0422 16:58:08.547273       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0422 16:58:08.547529       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0422 16:58:08.593156       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0422 16:58:08.593399       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0422 16:58:08.617645       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0422 16:58:08.617865       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0422 16:58:08.625605       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0422 16:58:08.625825       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0422 16:58:10.824371       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 22 17:01:23 addons-613799 kubelet[2200]: I0422 17:01:23.295452    2200 scope.go:117] "RemoveContainer" containerID="a335d08830aaba96efaba766f36cff1639368b35ed71cbb7d977aa403aca88ad"
	Apr 22 17:01:23 addons-613799 kubelet[2200]: E0422 17:01:23.295883    2200 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 10s restarting failed container=hello-world-app pod=hello-world-app-86c47465fc-g26z8_default(bddf7b1e-f4d5-4817-9980-c7d38b917400)\"" pod="default/hello-world-app-86c47465fc-g26z8" podUID="bddf7b1e-f4d5-4817-9980-c7d38b917400"
	Apr 22 17:01:24 addons-613799 kubelet[2200]: I0422 17:01:24.313590    2200 scope.go:117] "RemoveContainer" containerID="a335d08830aaba96efaba766f36cff1639368b35ed71cbb7d977aa403aca88ad"
	Apr 22 17:01:24 addons-613799 kubelet[2200]: E0422 17:01:24.314475    2200 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 10s restarting failed container=hello-world-app pod=hello-world-app-86c47465fc-g26z8_default(bddf7b1e-f4d5-4817-9980-c7d38b917400)\"" pod="default/hello-world-app-86c47465fc-g26z8" podUID="bddf7b1e-f4d5-4817-9980-c7d38b917400"
	Apr 22 17:01:25 addons-613799 kubelet[2200]: I0422 17:01:25.799540    2200 scope.go:117] "RemoveContainer" containerID="6a1ca433a9023cb34fcb46c3e969814f07ada950a4e5956e4f57cfd326dd1ea1"
	Apr 22 17:01:25 addons-613799 kubelet[2200]: E0422 17:01:25.800207    2200 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(e686b0b2-8006-4cd8-879a-c2828e33f5b0)\"" pod="kube-system/kube-ingress-dns-minikube" podUID="e686b0b2-8006-4cd8-879a-c2828e33f5b0"
	Apr 22 17:01:35 addons-613799 kubelet[2200]: I0422 17:01:35.423511    2200 scope.go:117] "RemoveContainer" containerID="6a1ca433a9023cb34fcb46c3e969814f07ada950a4e5956e4f57cfd326dd1ea1"
	Apr 22 17:01:35 addons-613799 kubelet[2200]: I0422 17:01:35.430671    2200 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qbkcj\" (UniqueName: \"kubernetes.io/projected/e686b0b2-8006-4cd8-879a-c2828e33f5b0-kube-api-access-qbkcj\") pod \"e686b0b2-8006-4cd8-879a-c2828e33f5b0\" (UID: \"e686b0b2-8006-4cd8-879a-c2828e33f5b0\") "
	Apr 22 17:01:35 addons-613799 kubelet[2200]: I0422 17:01:35.435443    2200 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e686b0b2-8006-4cd8-879a-c2828e33f5b0-kube-api-access-qbkcj" (OuterVolumeSpecName: "kube-api-access-qbkcj") pod "e686b0b2-8006-4cd8-879a-c2828e33f5b0" (UID: "e686b0b2-8006-4cd8-879a-c2828e33f5b0"). InnerVolumeSpecName "kube-api-access-qbkcj". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Apr 22 17:01:35 addons-613799 kubelet[2200]: I0422 17:01:35.531290    2200 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-qbkcj\" (UniqueName: \"kubernetes.io/projected/e686b0b2-8006-4cd8-879a-c2828e33f5b0-kube-api-access-qbkcj\") on node \"addons-613799\" DevicePath \"\""
	Apr 22 17:01:35 addons-613799 kubelet[2200]: I0422 17:01:35.801527    2200 scope.go:117] "RemoveContainer" containerID="a335d08830aaba96efaba766f36cff1639368b35ed71cbb7d977aa403aca88ad"
	Apr 22 17:01:35 addons-613799 kubelet[2200]: I0422 17:01:35.820881    2200 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e686b0b2-8006-4cd8-879a-c2828e33f5b0" path="/var/lib/kubelet/pods/e686b0b2-8006-4cd8-879a-c2828e33f5b0/volumes"
	Apr 22 17:01:36 addons-613799 kubelet[2200]: I0422 17:01:36.465174    2200 scope.go:117] "RemoveContainer" containerID="a335d08830aaba96efaba766f36cff1639368b35ed71cbb7d977aa403aca88ad"
	Apr 22 17:01:36 addons-613799 kubelet[2200]: I0422 17:01:36.465648    2200 scope.go:117] "RemoveContainer" containerID="adf69b7e977941f3005d1c2395811c76ec3355b1d1388b8ae46bc6922a861eef"
	Apr 22 17:01:36 addons-613799 kubelet[2200]: E0422 17:01:36.466140    2200 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 20s restarting failed container=hello-world-app pod=hello-world-app-86c47465fc-g26z8_default(bddf7b1e-f4d5-4817-9980-c7d38b917400)\"" pod="default/hello-world-app-86c47465fc-g26z8" podUID="bddf7b1e-f4d5-4817-9980-c7d38b917400"
	Apr 22 17:01:37 addons-613799 kubelet[2200]: I0422 17:01:37.807676    2200 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66780df8-e68a-46c9-8653-01f10ce1aabb" path="/var/lib/kubelet/pods/66780df8-e68a-46c9-8653-01f10ce1aabb/volumes"
	Apr 22 17:01:37 addons-613799 kubelet[2200]: I0422 17:01:37.808141    2200 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1e9e77c-cd2e-40c6-a5aa-8e75da670790" path="/var/lib/kubelet/pods/a1e9e77c-cd2e-40c6-a5aa-8e75da670790/volumes"
	Apr 22 17:01:39 addons-613799 kubelet[2200]: I0422 17:01:39.758683    2200 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nhtdv\" (UniqueName: \"kubernetes.io/projected/29478d59-bcc3-43e4-afbe-1fc6c594ebc6-kube-api-access-nhtdv\") pod \"29478d59-bcc3-43e4-afbe-1fc6c594ebc6\" (UID: \"29478d59-bcc3-43e4-afbe-1fc6c594ebc6\") "
	Apr 22 17:01:39 addons-613799 kubelet[2200]: I0422 17:01:39.758744    2200 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/29478d59-bcc3-43e4-afbe-1fc6c594ebc6-webhook-cert\") pod \"29478d59-bcc3-43e4-afbe-1fc6c594ebc6\" (UID: \"29478d59-bcc3-43e4-afbe-1fc6c594ebc6\") "
	Apr 22 17:01:39 addons-613799 kubelet[2200]: I0422 17:01:39.760716    2200 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29478d59-bcc3-43e4-afbe-1fc6c594ebc6-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "29478d59-bcc3-43e4-afbe-1fc6c594ebc6" (UID: "29478d59-bcc3-43e4-afbe-1fc6c594ebc6"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Apr 22 17:01:39 addons-613799 kubelet[2200]: I0422 17:01:39.764969    2200 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29478d59-bcc3-43e4-afbe-1fc6c594ebc6-kube-api-access-nhtdv" (OuterVolumeSpecName: "kube-api-access-nhtdv") pod "29478d59-bcc3-43e4-afbe-1fc6c594ebc6" (UID: "29478d59-bcc3-43e4-afbe-1fc6c594ebc6"). InnerVolumeSpecName "kube-api-access-nhtdv". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Apr 22 17:01:39 addons-613799 kubelet[2200]: I0422 17:01:39.813871    2200 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29478d59-bcc3-43e4-afbe-1fc6c594ebc6" path="/var/lib/kubelet/pods/29478d59-bcc3-43e4-afbe-1fc6c594ebc6/volumes"
	Apr 22 17:01:39 addons-613799 kubelet[2200]: I0422 17:01:39.858990    2200 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-nhtdv\" (UniqueName: \"kubernetes.io/projected/29478d59-bcc3-43e4-afbe-1fc6c594ebc6-kube-api-access-nhtdv\") on node \"addons-613799\" DevicePath \"\""
	Apr 22 17:01:39 addons-613799 kubelet[2200]: I0422 17:01:39.859049    2200 reconciler_common.go:289] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/29478d59-bcc3-43e4-afbe-1fc6c594ebc6-webhook-cert\") on node \"addons-613799\" DevicePath \"\""
	Apr 22 17:01:40 addons-613799 kubelet[2200]: I0422 17:01:40.630924    2200 scope.go:117] "RemoveContainer" containerID="2ffe8142931d3786392344cde596437be1612d9f93b30f4ffcbcfea4fbe39ac4"
	
	
	==> storage-provisioner [4d6e2db150b3] <==
	I0422 16:58:28.873149       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0422 16:58:28.897349       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0422 16:58:28.897394       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0422 16:58:28.909423       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0422 16:58:28.911338       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-613799_351d1af6-498a-423c-a5a4-519239f27f70!
	I0422 16:58:28.920385       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"11d34d90-be6c-420c-9273-1a33cac2585f", APIVersion:"v1", ResourceVersion:"560", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-613799_351d1af6-498a-423c-a5a4-519239f27f70 became leader
	I0422 16:58:29.013956       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-613799_351d1af6-498a-423c-a5a4-519239f27f70!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-613799 -n addons-613799
helpers_test.go:261: (dbg) Run:  kubectl --context addons-613799 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (36.45s)
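
Note on the failure above: the lookup `nslookup hello-john.test 192.168.49.2` timed out, and the kubelet log shows the kube-ingress-dns-minikube pod in CrashLoopBackOff shortly before its volumes were torn down, which is consistent with a resolver that never answers on the node IP. For local triage, a minimal standalone probe can help distinguish a slow-starting DNS pod from one that never responds. This is a sketch only, not part of the test suite; the node IP and hostname are taken from this run, while the retry count and timeouts are arbitrary illustrative choices.

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	resolver := &net.Resolver{
		PreferGo: true,
		// Ignore the address the Go resolver would normally pick and dial
		// the minikube node directly; the ingress-dns addon is expected to
		// serve DNS on UDP port 53 at the node IP reported by `minikube ip`
		// (192.168.49.2 in this run).
		Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
			d := net.Dialer{Timeout: 5 * time.Second}
			return d.DialContext(ctx, "udp", "192.168.49.2:53")
		},
	}
	for attempt := 1; attempt <= 5; attempt++ {
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		addrs, err := resolver.LookupHost(ctx, "hello-john.test")
		cancel()
		if err == nil {
			fmt.Println("resolved:", addrs)
			return
		}
		fmt.Printf("attempt %d: %v\n", attempt, err)
		time.Sleep(2 * time.Second)
	}
	fmt.Println("ingress-dns never answered; inspect the kube-ingress-dns-minikube pod")
}

If the probe eventually resolves, the test's single nslookup likely raced the addon's startup; if all attempts fail, the addon itself is broken for this run.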

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (375.34s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-986384 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0
E0422 18:01:40.126600    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/flannel-060426/client.crt: no such file or directory
E0422 18:01:49.507665    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/kindnet-060426/client.crt: no such file or directory
E0422 18:02:00.607125    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/flannel-060426/client.crt: no such file or directory
E0422 18:02:17.192199    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/kindnet-060426/client.crt: no such file or directory
E0422 18:02:17.421354    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/enable-default-cni-060426/client.crt: no such file or directory
E0422 18:02:24.998007    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/false-060426/client.crt: no such file or directory
E0422 18:02:31.894015    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/skaffold-819699/client.crt: no such file or directory
E0422 18:02:41.567931    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/flannel-060426/client.crt: no such file or directory
E0422 18:02:57.000255    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/calico-060426/client.crt: no such file or directory
E0422 18:03:03.110437    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/bridge-060426/client.crt: no such file or directory
E0422 18:03:03.115810    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/bridge-060426/client.crt: no such file or directory
E0422 18:03:03.126100    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/bridge-060426/client.crt: no such file or directory
E0422 18:03:03.146427    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/bridge-060426/client.crt: no such file or directory
E0422 18:03:03.186802    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/bridge-060426/client.crt: no such file or directory
E0422 18:03:03.267076    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/bridge-060426/client.crt: no such file or directory
E0422 18:03:03.427454    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/bridge-060426/client.crt: no such file or directory
E0422 18:03:03.747863    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/bridge-060426/client.crt: no such file or directory
E0422 18:03:04.388751    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/bridge-060426/client.crt: no such file or directory
E0422 18:03:05.669227    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/bridge-060426/client.crt: no such file or directory
E0422 18:03:08.229940    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/bridge-060426/client.crt: no such file or directory
E0422 18:03:13.350147    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/bridge-060426/client.crt: no such file or directory
E0422 18:03:23.590304    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/bridge-060426/client.crt: no such file or directory
E0422 18:03:24.685479    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/calico-060426/client.crt: no such file or directory
E0422 18:03:37.114822    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/kubenet-060426/client.crt: no such file or directory
E0422 18:03:37.120052    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/kubenet-060426/client.crt: no such file or directory
E0422 18:03:37.130291    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/kubenet-060426/client.crt: no such file or directory
E0422 18:03:37.150534    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/kubenet-060426/client.crt: no such file or directory
E0422 18:03:37.190773    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/kubenet-060426/client.crt: no such file or directory
E0422 18:03:37.271038    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/kubenet-060426/client.crt: no such file or directory
E0422 18:03:37.431445    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/kubenet-060426/client.crt: no such file or directory
E0422 18:03:37.751879    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/kubenet-060426/client.crt: no such file or directory
E0422 18:03:38.392169    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/kubenet-060426/client.crt: no such file or directory
E0422 18:03:39.341968    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/enable-default-cni-060426/client.crt: no such file or directory
E0422 18:03:39.672764    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/kubenet-060426/client.crt: no such file or directory
E0422 18:03:42.233123    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/kubenet-060426/client.crt: no such file or directory
E0422 18:03:44.071020    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/bridge-060426/client.crt: no such file or directory
E0422 18:03:46.154639    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/custom-flannel-060426/client.crt: no such file or directory
E0422 18:03:47.354061    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/kubenet-060426/client.crt: no such file or directory
E0422 18:03:57.595276    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/kubenet-060426/client.crt: no such file or directory
E0422 18:04:03.488239    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/flannel-060426/client.crt: no such file or directory
E0422 18:04:13.841711    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/custom-flannel-060426/client.crt: no such file or directory
E0422 18:04:18.075610    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/kubenet-060426/client.crt: no such file or directory
E0422 18:04:25.032102    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/bridge-060426/client.crt: no such file or directory
E0422 18:04:41.154759    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/false-060426/client.crt: no such file or directory
E0422 18:04:45.110120    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/addons-613799/client.crt: no such file or directory
E0422 18:04:59.036044    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/kubenet-060426/client.crt: no such file or directory
E0422 18:05:08.839165    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/false-060426/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p old-k8s-version-986384 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0: exit status 102 (6m12.810640522s)

                                                
                                                
-- stdout --
	* [old-k8s-version-986384] minikube v1.33.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18706
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18706-2371/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18706-2371/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.0
	* Using the docker driver based on existing profile
	* Starting "old-k8s-version-986384" primary control-plane node in "old-k8s-version-986384" cluster
	* Pulling base image v0.0.43-1713736339-18706 ...
	* Restarting existing docker container for "old-k8s-version-986384" ...
	* Preparing Kubernetes v1.20.0 on Docker 26.0.2 ...
	* Verifying Kubernetes components...
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image registry.k8s.io/echoserver:1.4
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-986384 addons enable metrics-server
	
	* Enabled addons: storage-provisioner, dashboard, metrics-server, default-storageclass
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0422 18:01:39.097567  365956 out.go:291] Setting OutFile to fd 1 ...
	I0422 18:01:39.097748  365956 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 18:01:39.097757  365956 out.go:304] Setting ErrFile to fd 2...
	I0422 18:01:39.097763  365956 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 18:01:39.098004  365956 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18706-2371/.minikube/bin
	I0422 18:01:39.098390  365956 out.go:298] Setting JSON to false
	I0422 18:01:39.099593  365956 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":6246,"bootTime":1713802653,"procs":263,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0422 18:01:39.099667  365956 start.go:139] virtualization:  
	I0422 18:01:39.104069  365956 out.go:177] * [old-k8s-version-986384] minikube v1.33.0 on Ubuntu 20.04 (arm64)
	I0422 18:01:39.106595  365956 out.go:177]   - MINIKUBE_LOCATION=18706
	I0422 18:01:39.109008  365956 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0422 18:01:39.106625  365956 notify.go:220] Checking for updates...
	I0422 18:01:39.113385  365956 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18706-2371/kubeconfig
	I0422 18:01:39.116201  365956 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18706-2371/.minikube
	I0422 18:01:39.118454  365956 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0422 18:01:39.120587  365956 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0422 18:01:39.123368  365956 config.go:182] Loaded profile config "old-k8s-version-986384": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0422 18:01:39.125860  365956 out.go:177] * Kubernetes 1.30.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.0
	I0422 18:01:39.127666  365956 driver.go:392] Setting default libvirt URI to qemu:///system
	I0422 18:01:39.147403  365956 docker.go:122] docker version: linux-26.0.2:Docker Engine - Community
	I0422 18:01:39.147585  365956 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0422 18:01:39.210519  365956 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2024-04-22 18:01:39.200168424 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1058-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215101440 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:26.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1]] Warnings:<nil>}}
	I0422 18:01:39.210627  365956 docker.go:295] overlay module found
	I0422 18:01:39.213502  365956 out.go:177] * Using the docker driver based on existing profile
	I0422 18:01:39.215421  365956 start.go:297] selected driver: docker
	I0422 18:01:39.215440  365956 start.go:901] validating driver "docker" against &{Name:old-k8s-version-986384 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-986384 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 18:01:39.215555  365956 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0422 18:01:39.216372  365956 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0422 18:01:39.286165  365956 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2024-04-22 18:01:39.277213909 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1058-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215101440 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:26.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1]] Warnings:<nil>}}
	I0422 18:01:39.286519  365956 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0422 18:01:39.286575  365956 cni.go:84] Creating CNI manager for ""
	I0422 18:01:39.286594  365956 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0422 18:01:39.286631  365956 start.go:340] cluster config:
	{Name:old-k8s-version-986384 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-986384 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 18:01:39.290477  365956 out.go:177] * Starting "old-k8s-version-986384" primary control-plane node in "old-k8s-version-986384" cluster
	I0422 18:01:39.292112  365956 cache.go:121] Beginning downloading kic base image for docker with docker
	I0422 18:01:39.294273  365956 out.go:177] * Pulling base image v0.0.43-1713736339-18706 ...
	I0422 18:01:39.296271  365956 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0422 18:01:39.296348  365956 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e in local docker daemon
	I0422 18:01:39.296359  365956 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18706-2371/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0422 18:01:39.296502  365956 cache.go:56] Caching tarball of preloaded images
	I0422 18:01:39.296610  365956 preload.go:173] Found /home/jenkins/minikube-integration/18706-2371/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0422 18:01:39.296628  365956 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0422 18:01:39.296737  365956 profile.go:143] Saving config to /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/old-k8s-version-986384/config.json ...
	I0422 18:01:39.310344  365956 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e in local docker daemon, skipping pull
	I0422 18:01:39.310369  365956 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e exists in daemon, skipping load
	I0422 18:01:39.310388  365956 cache.go:194] Successfully downloaded all kic artifacts
	I0422 18:01:39.310421  365956 start.go:360] acquireMachinesLock for old-k8s-version-986384: {Name:mk06e93d32869e0d45661c0f3956bcfa019e47d4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0422 18:01:39.310497  365956 start.go:364] duration metric: took 51.666µs to acquireMachinesLock for "old-k8s-version-986384"
	I0422 18:01:39.310525  365956 start.go:96] Skipping create...Using existing machine configuration
	I0422 18:01:39.310535  365956 fix.go:54] fixHost starting: 
	I0422 18:01:39.310792  365956 cli_runner.go:164] Run: docker container inspect old-k8s-version-986384 --format={{.State.Status}}
	I0422 18:01:39.325871  365956 fix.go:112] recreateIfNeeded on old-k8s-version-986384: state=Stopped err=<nil>
	W0422 18:01:39.325907  365956 fix.go:138] unexpected machine state, will restart: <nil>
	I0422 18:01:39.328385  365956 out.go:177] * Restarting existing docker container for "old-k8s-version-986384" ...
	I0422 18:01:39.330755  365956 cli_runner.go:164] Run: docker start old-k8s-version-986384
	I0422 18:01:39.650278  365956 cli_runner.go:164] Run: docker container inspect old-k8s-version-986384 --format={{.State.Status}}
	I0422 18:01:39.668129  365956 kic.go:430] container "old-k8s-version-986384" state is running.
	I0422 18:01:39.668516  365956 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-986384
	I0422 18:01:39.699420  365956 profile.go:143] Saving config to /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/old-k8s-version-986384/config.json ...
	I0422 18:01:39.699920  365956 machine.go:94] provisionDockerMachine start ...
	I0422 18:01:39.699990  365956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-986384
	I0422 18:01:39.728937  365956 main.go:141] libmachine: Using SSH client type: native
	I0422 18:01:39.729680  365956 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e90] 0x3e46f0 <nil>  [] 0s} 127.0.0.1 33134 <nil> <nil>}
	I0422 18:01:39.729697  365956 main.go:141] libmachine: About to run SSH command:
	hostname
	I0422 18:01:39.730381  365956 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0422 18:01:42.872376  365956 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-986384
	
	I0422 18:01:42.872402  365956 ubuntu.go:169] provisioning hostname "old-k8s-version-986384"
	I0422 18:01:42.872467  365956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-986384
	I0422 18:01:42.890582  365956 main.go:141] libmachine: Using SSH client type: native
	I0422 18:01:42.890867  365956 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e90] 0x3e46f0 <nil>  [] 0s} 127.0.0.1 33134 <nil> <nil>}
	I0422 18:01:42.890886  365956 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-986384 && echo "old-k8s-version-986384" | sudo tee /etc/hostname
	I0422 18:01:43.034637  365956 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-986384
	
	I0422 18:01:43.034734  365956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-986384
	I0422 18:01:43.051111  365956 main.go:141] libmachine: Using SSH client type: native
	I0422 18:01:43.051359  365956 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e90] 0x3e46f0 <nil>  [] 0s} 127.0.0.1 33134 <nil> <nil>}
	I0422 18:01:43.051375  365956 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-986384' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-986384/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-986384' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0422 18:01:43.177187  365956 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0422 18:01:43.177216  365956 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18706-2371/.minikube CaCertPath:/home/jenkins/minikube-integration/18706-2371/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18706-2371/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18706-2371/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18706-2371/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18706-2371/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18706-2371/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18706-2371/.minikube}
	I0422 18:01:43.177234  365956 ubuntu.go:177] setting up certificates
	I0422 18:01:43.177244  365956 provision.go:84] configureAuth start
	I0422 18:01:43.177303  365956 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-986384
	I0422 18:01:43.203475  365956 provision.go:143] copyHostCerts
	I0422 18:01:43.203547  365956 exec_runner.go:144] found /home/jenkins/minikube-integration/18706-2371/.minikube/ca.pem, removing ...
	I0422 18:01:43.203555  365956 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18706-2371/.minikube/ca.pem
	I0422 18:01:43.203635  365956 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-2371/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18706-2371/.minikube/ca.pem (1078 bytes)
	I0422 18:01:43.203732  365956 exec_runner.go:144] found /home/jenkins/minikube-integration/18706-2371/.minikube/cert.pem, removing ...
	I0422 18:01:43.203737  365956 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18706-2371/.minikube/cert.pem
	I0422 18:01:43.203762  365956 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-2371/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18706-2371/.minikube/cert.pem (1123 bytes)
	I0422 18:01:43.203812  365956 exec_runner.go:144] found /home/jenkins/minikube-integration/18706-2371/.minikube/key.pem, removing ...
	I0422 18:01:43.203816  365956 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18706-2371/.minikube/key.pem
	I0422 18:01:43.203839  365956 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-2371/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18706-2371/.minikube/key.pem (1675 bytes)
	I0422 18:01:43.203884  365956 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18706-2371/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18706-2371/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18706-2371/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-986384 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-986384]
	I0422 18:01:44.043536  365956 provision.go:177] copyRemoteCerts
	I0422 18:01:44.043624  365956 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0422 18:01:44.043676  365956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-986384
	I0422 18:01:44.061005  365956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/18706-2371/.minikube/machines/old-k8s-version-986384/id_rsa Username:docker}
	I0422 18:01:44.158683  365956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-2371/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0422 18:01:44.186244  365956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-2371/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0422 18:01:44.212243  365956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-2371/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0422 18:01:44.238471  365956 provision.go:87] duration metric: took 1.061213688s to configureAuth
	I0422 18:01:44.238499  365956 ubuntu.go:193] setting minikube options for container-runtime
	I0422 18:01:44.238702  365956 config.go:182] Loaded profile config "old-k8s-version-986384": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0422 18:01:44.238765  365956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-986384
	I0422 18:01:44.262357  365956 main.go:141] libmachine: Using SSH client type: native
	I0422 18:01:44.262617  365956 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e90] 0x3e46f0 <nil>  [] 0s} 127.0.0.1 33134 <nil> <nil>}
	I0422 18:01:44.262632  365956 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0422 18:01:44.389358  365956 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0422 18:01:44.389382  365956 ubuntu.go:71] root file system type: overlay
	I0422 18:01:44.389499  365956 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0422 18:01:44.389609  365956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-986384
	I0422 18:01:44.405634  365956 main.go:141] libmachine: Using SSH client type: native
	I0422 18:01:44.405894  365956 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e90] 0x3e46f0 <nil>  [] 0s} 127.0.0.1 33134 <nil> <nil>}
	I0422 18:01:44.405976  365956 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0422 18:01:44.542465  365956 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0422 18:01:44.542560  365956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-986384
	I0422 18:01:44.559302  365956 main.go:141] libmachine: Using SSH client type: native
	I0422 18:01:44.559555  365956 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e90] 0x3e46f0 <nil>  [] 0s} 127.0.0.1 33134 <nil> <nil>}
	I0422 18:01:44.559583  365956 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0422 18:01:44.694484  365956 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0422 18:01:44.694507  365956 machine.go:97] duration metric: took 4.994572397s to provisionDockerMachine
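
The shell guard two lines up -- diff -u old new || { mv ...; systemctl daemon-reload/enable/restart; } -- only swaps the staged docker.service.new into place and restarts Docker when it actually differs from the installed unit. A minimal Go sketch of the same install-if-changed pattern; the paths come from the log, but the function itself is an illustration, not minikube's code, and would need root to run:

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// installIfChanged replaces current with staged and restarts Docker,
// but only when the two files differ -- mirroring the `diff -u ... ||`
// one-liner in the log above.
func installIfChanged(current, staged string) error {
	old, _ := os.ReadFile(current) // a missing file counts as "changed"
	newer, err := os.ReadFile(staged)
	if err != nil {
		return err
	}
	if bytes.Equal(old, newer) {
		return nil // nothing to do, same as the diff succeeding
	}
	if err := os.Rename(staged, current); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"daemon-reload"},
		{"enable", "docker"},
		{"restart", "docker"},
	} {
		if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
			return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
		}
	}
	return nil
}

func main() {
	err := installIfChanged(
		"/lib/systemd/system/docker.service",
		"/lib/systemd/system/docker.service.new",
	)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
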
	I0422 18:01:44.694518  365956 start.go:293] postStartSetup for "old-k8s-version-986384" (driver="docker")
	I0422 18:01:44.694530  365956 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0422 18:01:44.694606  365956 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0422 18:01:44.694650  365956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-986384
	I0422 18:01:44.711053  365956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/18706-2371/.minikube/machines/old-k8s-version-986384/id_rsa Username:docker}
	I0422 18:01:44.810561  365956 ssh_runner.go:195] Run: cat /etc/os-release
	I0422 18:01:44.813773  365956 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0422 18:01:44.813809  365956 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0422 18:01:44.813819  365956 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0422 18:01:44.813827  365956 info.go:137] Remote host: Ubuntu 22.04.4 LTS
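
The three "Couldn't set key ..." warnings above come from mapping /etc/os-release entries onto a fixed struct that has no field for VERSION_CODENAME, PRIVACY_POLICY_URL, or UBUNTU_CODENAME. A small sketch that parses the same file into a map, so no key is dropped; this is illustrative only, not libmachine's actual parser:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// parseOSRelease reads key=value pairs from an os-release file,
// stripping surrounding quotes from values.
func parseOSRelease(path string) (map[string]string, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()
	info := make(map[string]string)
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		k, v, ok := strings.Cut(line, "=")
		if !ok {
			continue
		}
		info[k] = strings.Trim(v, `"`)
	}
	return info, sc.Err()
}

func main() {
	info, err := parseOSRelease("/etc/os-release")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println(info["PRETTY_NAME"]) // e.g. "Ubuntu 22.04.4 LTS", as in the log
}
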
	I0422 18:01:44.813845  365956 filesync.go:126] Scanning /home/jenkins/minikube-integration/18706-2371/.minikube/addons for local assets ...
	I0422 18:01:44.813910  365956 filesync.go:126] Scanning /home/jenkins/minikube-integration/18706-2371/.minikube/files for local assets ...
	I0422 18:01:44.813999  365956 filesync.go:149] local asset: /home/jenkins/minikube-integration/18706-2371/.minikube/files/etc/ssl/certs/77282.pem -> 77282.pem in /etc/ssl/certs
	I0422 18:01:44.814122  365956 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0422 18:01:44.823808  365956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-2371/.minikube/files/etc/ssl/certs/77282.pem --> /etc/ssl/certs/77282.pem (1708 bytes)
	I0422 18:01:44.848822  365956 start.go:296] duration metric: took 154.28891ms for postStartSetup
	I0422 18:01:44.848906  365956 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0422 18:01:44.848955  365956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-986384
	I0422 18:01:44.864797  365956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/18706-2371/.minikube/machines/old-k8s-version-986384/id_rsa Username:docker}
	I0422 18:01:44.957771  365956 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0422 18:01:44.962440  365956 fix.go:56] duration metric: took 5.651903998s for fixHost
	I0422 18:01:44.962466  365956 start.go:83] releasing machines lock for "old-k8s-version-986384", held for 5.651954679s
	I0422 18:01:44.962544  365956 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-986384
	I0422 18:01:44.978340  365956 ssh_runner.go:195] Run: cat /version.json
	I0422 18:01:44.978394  365956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-986384
	I0422 18:01:44.978651  365956 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0422 18:01:44.978714  365956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-986384
	I0422 18:01:44.993618  365956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/18706-2371/.minikube/machines/old-k8s-version-986384/id_rsa Username:docker}
	I0422 18:01:44.998466  365956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/18706-2371/.minikube/machines/old-k8s-version-986384/id_rsa Username:docker}
	I0422 18:01:45.270592  365956 ssh_runner.go:195] Run: systemctl --version
	I0422 18:01:45.287904  365956 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0422 18:01:45.299846  365956 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0422 18:01:45.332289  365956 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0422 18:01:45.332492  365956 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0422 18:01:45.367976  365956 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0422 18:01:45.390876  365956 cni.go:308] configured [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
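
The find/sed one-liners above rewrite every "subnet" (and "gateway") field in the bridge and podman CNI configs to the 10.244.0.0/16 pod CIDR. The same rewrite expressed as a JSON round-trip, purely as a sketch of what the regexes accomplish -- the real patch edits the files in place over SSH, and the sample input here is invented:

package main

import (
	"encoding/json"
	"fmt"
)

// patchSubnet replaces every "subnet" value anywhere in a CNI config
// document with the given pod CIDR.
func patchSubnet(conf []byte, cidr string) ([]byte, error) {
	var m map[string]any
	if err := json.Unmarshal(conf, &m); err != nil {
		return nil, err
	}
	rewrite(m, cidr)
	return json.MarshalIndent(m, "", "  ")
}

// rewrite walks nested maps and arrays, updating "subnet" keys.
func rewrite(v any, cidr string) {
	switch t := v.(type) {
	case map[string]any:
		for k, val := range t {
			if k == "subnet" {
				t[k] = cidr
				continue
			}
			rewrite(val, cidr)
		}
	case []any:
		for _, e := range t {
			rewrite(e, cidr)
		}
	}
}

func main() {
	in := []byte(`{"name":"bridge","ipam":{"ranges":[[{"subnet":"10.88.0.0/16"}]]}}`)
	out, err := patchSubnet(in, "10.244.0.0/16")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
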
	I0422 18:01:45.390959  365956 start.go:494] detecting cgroup driver to use...
	I0422 18:01:45.391008  365956 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0422 18:01:45.391173  365956 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0422 18:01:45.418033  365956 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0422 18:01:45.429671  365956 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0422 18:01:45.444402  365956 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0422 18:01:45.444555  365956 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0422 18:01:45.459586  365956 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0422 18:01:45.472226  365956 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0422 18:01:45.485435  365956 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0422 18:01:45.496591  365956 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0422 18:01:45.507785  365956 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0422 18:01:45.519530  365956 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0422 18:01:45.529047  365956 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0422 18:01:45.538660  365956 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 18:01:45.631444  365956 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0422 18:01:45.743317  365956 start.go:494] detecting cgroup driver to use...
	I0422 18:01:45.743427  365956 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0422 18:01:45.743505  365956 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0422 18:01:45.764224  365956 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0422 18:01:45.764346  365956 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0422 18:01:45.786193  365956 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0422 18:01:45.812250  365956 ssh_runner.go:195] Run: which cri-dockerd
	I0422 18:01:45.816133  365956 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0422 18:01:45.825426  365956 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0422 18:01:45.845763  365956 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0422 18:01:45.967986  365956 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0422 18:01:46.075277  365956 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0422 18:01:46.075477  365956 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0422 18:01:46.096266  365956 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 18:01:46.206944  365956 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0422 18:01:46.676238  365956 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0422 18:01:46.699732  365956 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0422 18:01:46.724329  365956 out.go:204] * Preparing Kubernetes v1.20.0 on Docker 26.0.2 ...
	I0422 18:01:46.724525  365956 cli_runner.go:164] Run: docker network inspect old-k8s-version-986384 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0422 18:01:46.740301  365956 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0422 18:01:46.744443  365956 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0422 18:01:46.758844  365956 kubeadm.go:877] updating cluster {Name:old-k8s-version-986384 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-986384 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0422 18:01:46.758960  365956 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0422 18:01:46.759016  365956 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0422 18:01:46.779981  365956 docker.go:685] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/kube-proxy:v1.20.0
	k8s.gcr.io/kube-proxy:v1.20.0
	k8s.gcr.io/kube-apiserver:v1.20.0
	registry.k8s.io/kube-apiserver:v1.20.0
	k8s.gcr.io/kube-controller-manager:v1.20.0
	registry.k8s.io/kube-controller-manager:v1.20.0
	registry.k8s.io/kube-scheduler:v1.20.0
	k8s.gcr.io/kube-scheduler:v1.20.0
	k8s.gcr.io/etcd:3.4.13-0
	registry.k8s.io/etcd:3.4.13-0
	k8s.gcr.io/coredns:1.7.0
	registry.k8s.io/coredns:1.7.0
	k8s.gcr.io/pause:3.2
	registry.k8s.io/pause:3.2
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0422 18:01:46.780002  365956 docker.go:615] Images already preloaded, skipping extraction
	I0422 18:01:46.780071  365956 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0422 18:01:46.799736  365956 docker.go:685] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-proxy:v1.20.0
	registry.k8s.io/kube-proxy:v1.20.0
	k8s.gcr.io/kube-controller-manager:v1.20.0
	registry.k8s.io/kube-controller-manager:v1.20.0
	k8s.gcr.io/kube-apiserver:v1.20.0
	registry.k8s.io/kube-apiserver:v1.20.0
	k8s.gcr.io/kube-scheduler:v1.20.0
	registry.k8s.io/kube-scheduler:v1.20.0
	k8s.gcr.io/etcd:3.4.13-0
	registry.k8s.io/etcd:3.4.13-0
	k8s.gcr.io/coredns:1.7.0
	registry.k8s.io/coredns:1.7.0
	k8s.gcr.io/pause:3.2
	registry.k8s.io/pause:3.2
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0422 18:01:46.799758  365956 cache_images.go:84] Images are preloaded, skipping loading
	I0422 18:01:46.799769  365956 kubeadm.go:928] updating node { 192.168.76.2 8443 v1.20.0 docker true true} ...
	I0422 18:01:46.799873  365956 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-986384 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-986384 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0422 18:01:46.799933  365956 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
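
The Run line above reads back Docker's active cgroup driver with docker info --format {{.CgroupDriver}} before deciding how to configure the kubelet. The same query issued from Go, assuming only that a docker CLI is on PATH:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// dockerCgroupDriver shells out to the docker CLI exactly as the log does.
func dockerCgroupDriver() (string, error) {
	out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	driver, err := dockerCgroupDriver()
	if err != nil {
		fmt.Println("docker not available:", err)
		return
	}
	fmt.Println("cgroup driver:", driver) // e.g. "cgroupfs", as detected above
}
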
	I0422 18:01:46.856725  365956 cni.go:84] Creating CNI manager for ""
	I0422 18:01:46.856837  365956 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0422 18:01:46.856868  365956 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0422 18:01:46.856910  365956 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-986384 NodeName:old-k8s-version-986384 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0422 18:01:46.857052  365956 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-986384"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
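
The kubeadm config above is rendered from the kubeadm options struct logged just before it. A toy text/template sketch in the same spirit, with a small illustrative subset of fields taken from the log -- this is not minikube's actual template:

package main

import (
	"os"
	"text/template"
)

// A trimmed-down ClusterConfiguration template; the real config also
// carries apiServer/controllerManager/scheduler extraArgs, etcd, etc.
const clusterCfg = `apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
controlPlaneEndpoint: {{.Endpoint}}
kubernetesVersion: {{.Version}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	t := template.Must(template.New("cfg").Parse(clusterCfg))
	_ = t.Execute(os.Stdout, map[string]string{
		"Endpoint":      "control-plane.minikube.internal:8443",
		"Version":       "v1.20.0",
		"PodSubnet":     "10.244.0.0/16",
		"ServiceSubnet": "10.96.0.0/12",
	})
}
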
	
	I0422 18:01:46.857123  365956 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0422 18:01:46.867038  365956 binaries.go:44] Found k8s binaries, skipping transfer
	I0422 18:01:46.867110  365956 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0422 18:01:46.876381  365956 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (348 bytes)
	I0422 18:01:46.895376  365956 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0422 18:01:46.915854  365956 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2118 bytes)
	I0422 18:01:46.934485  365956 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0422 18:01:46.938113  365956 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
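
The { grep -v ...; echo ...; } > /tmp/h.$$; sudo cp pattern above (used for both host.minikube.internal and control-plane.minikube.internal) is an idempotent upsert: drop any line already mapping the name, then append a fresh IP<TAB>name entry. The same transformation as a pure Go function -- a sketch only; writing the real /etc/hosts of course needs root, which is why the log copies via sudo:

package main

import (
	"fmt"
	"strings"
)

// upsertHost removes any existing mapping for name and appends a new
// "ip<TAB>name" line, leaving all other entries untouched.
func upsertHost(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(hosts, "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // drop the stale mapping, like `grep -v`
		}
		kept = append(kept, line)
	}
	return strings.Join(kept, "\n") + fmt.Sprintf("\n%s\t%s\n", ip, name)
}

func main() {
	hosts := "127.0.0.1\tlocalhost\n192.168.76.1\thost.minikube.internal"
	fmt.Print(upsertHost(hosts, "192.168.76.2", "control-plane.minikube.internal"))
}
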
	I0422 18:01:46.948943  365956 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 18:01:47.037494  365956 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0422 18:01:47.053394  365956 certs.go:68] Setting up /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/old-k8s-version-986384 for IP: 192.168.76.2
	I0422 18:01:47.053416  365956 certs.go:194] generating shared ca certs ...
	I0422 18:01:47.053432  365956 certs.go:226] acquiring lock for ca certs: {Name:mkc0c6170c42b1b43b7f622fcbfe2e475bd8761f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 18:01:47.053613  365956 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18706-2371/.minikube/ca.key
	I0422 18:01:47.053665  365956 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18706-2371/.minikube/proxy-client-ca.key
	I0422 18:01:47.053676  365956 certs.go:256] generating profile certs ...
	I0422 18:01:47.053781  365956 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/old-k8s-version-986384/client.key
	I0422 18:01:47.053859  365956 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/old-k8s-version-986384/apiserver.key.17f72d1b
	I0422 18:01:47.053910  365956 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/old-k8s-version-986384/proxy-client.key
	I0422 18:01:47.054022  365956 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-2371/.minikube/certs/7728.pem (1338 bytes)
	W0422 18:01:47.054057  365956 certs.go:480] ignoring /home/jenkins/minikube-integration/18706-2371/.minikube/certs/7728_empty.pem, impossibly tiny 0 bytes
	I0422 18:01:47.054068  365956 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-2371/.minikube/certs/ca-key.pem (1675 bytes)
	I0422 18:01:47.054100  365956 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-2371/.minikube/certs/ca.pem (1078 bytes)
	I0422 18:01:47.054131  365956 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-2371/.minikube/certs/cert.pem (1123 bytes)
	I0422 18:01:47.054159  365956 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-2371/.minikube/certs/key.pem (1675 bytes)
	I0422 18:01:47.054205  365956 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-2371/.minikube/files/etc/ssl/certs/77282.pem (1708 bytes)
	I0422 18:01:47.054831  365956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-2371/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0422 18:01:47.090065  365956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-2371/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0422 18:01:47.118206  365956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-2371/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0422 18:01:47.147005  365956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-2371/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0422 18:01:47.188069  365956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/old-k8s-version-986384/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0422 18:01:47.220115  365956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/old-k8s-version-986384/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0422 18:01:47.264571  365956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/old-k8s-version-986384/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0422 18:01:47.306550  365956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/old-k8s-version-986384/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0422 18:01:47.333769  365956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-2371/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0422 18:01:47.360909  365956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-2371/.minikube/certs/7728.pem --> /usr/share/ca-certificates/7728.pem (1338 bytes)
	I0422 18:01:47.387775  365956 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-2371/.minikube/files/etc/ssl/certs/77282.pem --> /usr/share/ca-certificates/77282.pem (1708 bytes)
	I0422 18:01:47.414320  365956 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0422 18:01:47.432984  365956 ssh_runner.go:195] Run: openssl version
	I0422 18:01:47.438697  365956 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7728.pem && ln -fs /usr/share/ca-certificates/7728.pem /etc/ssl/certs/7728.pem"
	I0422 18:01:47.448687  365956 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7728.pem
	I0422 18:01:47.452231  365956 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 22 17:02 /usr/share/ca-certificates/7728.pem
	I0422 18:01:47.452336  365956 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7728.pem
	I0422 18:01:47.459952  365956 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7728.pem /etc/ssl/certs/51391683.0"
	I0422 18:01:47.469323  365956 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/77282.pem && ln -fs /usr/share/ca-certificates/77282.pem /etc/ssl/certs/77282.pem"
	I0422 18:01:47.478929  365956 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/77282.pem
	I0422 18:01:47.482838  365956 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 22 17:02 /usr/share/ca-certificates/77282.pem
	I0422 18:01:47.482905  365956 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/77282.pem
	I0422 18:01:47.490030  365956 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/77282.pem /etc/ssl/certs/3ec20f2e.0"
	I0422 18:01:47.500047  365956 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0422 18:01:47.510256  365956 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0422 18:01:47.513837  365956 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 22 16:57 /usr/share/ca-certificates/minikubeCA.pem
	I0422 18:01:47.513906  365956 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0422 18:01:47.520856  365956 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0422 18:01:47.530108  365956 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0422 18:01:47.533629  365956 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0422 18:01:47.540500  365956 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0422 18:01:47.547454  365956 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0422 18:01:47.554685  365956 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0422 18:01:47.562317  365956 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0422 18:01:47.569735  365956 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
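
Each openssl x509 -checkend 86400 run above asks whether a certificate expires within the next 86400 seconds (24 hours). The equivalent check with Go's standard library, using one of the cert paths from the log; a sketch of the check, not minikube's code:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"fmt"
	"os"
	"time"
)

// certExpiresSoon reports whether the first certificate in pemBytes
// expires within the given window (-checkend 86400 == 24h).
func certExpiresSoon(pemBytes []byte, window time.Duration) (bool, error) {
	block, _ := pem.Decode(pemBytes)
	if block == nil || block.Type != "CERTIFICATE" {
		return false, errors.New("no certificate PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	soon, err := certExpiresSoon(data, 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}
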
	I0422 18:01:47.576888  365956 kubeadm.go:391] StartCluster: {Name:old-k8s-version-986384 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-986384 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 18:01:47.577065  365956 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0422 18:01:47.592742  365956 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0422 18:01:47.602860  365956 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0422 18:01:47.602934  365956 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0422 18:01:47.602952  365956 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0422 18:01:47.603035  365956 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0422 18:01:47.615442  365956 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0422 18:01:47.616437  365956 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-986384" does not appear in /home/jenkins/minikube-integration/18706-2371/kubeconfig
	I0422 18:01:47.617066  365956 kubeconfig.go:62] /home/jenkins/minikube-integration/18706-2371/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-986384" cluster setting kubeconfig missing "old-k8s-version-986384" context setting]
	I0422 18:01:47.617899  365956 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18706-2371/kubeconfig: {Name:mkd3bbb31387c9740f072dd59bcca857246cca69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 18:01:47.620203  365956 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0422 18:01:47.632235  365956 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.76.2
	I0422 18:01:47.632267  365956 kubeadm.go:591] duration metric: took 29.293324ms to restartPrimaryControlPlane
	I0422 18:01:47.632277  365956 kubeadm.go:393] duration metric: took 55.398791ms to StartCluster
	I0422 18:01:47.632294  365956 settings.go:142] acquiring lock: {Name:mk4d4aae5dac6b45b6276ad1e8e6929d4ff7540f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 18:01:47.632356  365956 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18706-2371/kubeconfig
	I0422 18:01:47.634051  365956 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18706-2371/kubeconfig: {Name:mkd3bbb31387c9740f072dd59bcca857246cca69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 18:01:47.634274  365956 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0422 18:01:47.639060  365956 out.go:177] * Verifying Kubernetes components...
	I0422 18:01:47.634878  365956 config.go:182] Loaded profile config "old-k8s-version-986384": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0422 18:01:47.634887  365956 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0422 18:01:47.641306  365956 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-986384"
	I0422 18:01:47.641345  365956 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-986384"
	W0422 18:01:47.641356  365956 addons.go:243] addon storage-provisioner should already be in state true
	I0422 18:01:47.641387  365956 host.go:66] Checking if "old-k8s-version-986384" exists ...
	I0422 18:01:47.641449  365956 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 18:01:47.641579  365956 addons.go:69] Setting dashboard=true in profile "old-k8s-version-986384"
	I0422 18:01:47.641654  365956 addons.go:234] Setting addon dashboard=true in "old-k8s-version-986384"
	W0422 18:01:47.641678  365956 addons.go:243] addon dashboard should already be in state true
	I0422 18:01:47.641751  365956 host.go:66] Checking if "old-k8s-version-986384" exists ...
	I0422 18:01:47.641956  365956 cli_runner.go:164] Run: docker container inspect old-k8s-version-986384 --format={{.State.Status}}
	I0422 18:01:47.642243  365956 cli_runner.go:164] Run: docker container inspect old-k8s-version-986384 --format={{.State.Status}}
	I0422 18:01:47.642435  365956 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-986384"
	I0422 18:01:47.642471  365956 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-986384"
	I0422 18:01:47.642720  365956 cli_runner.go:164] Run: docker container inspect old-k8s-version-986384 --format={{.State.Status}}
	I0422 18:01:47.642996  365956 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-986384"
	I0422 18:01:47.643024  365956 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-986384"
	W0422 18:01:47.643031  365956 addons.go:243] addon metrics-server should already be in state true
	I0422 18:01:47.643058  365956 host.go:66] Checking if "old-k8s-version-986384" exists ...
	I0422 18:01:47.643449  365956 cli_runner.go:164] Run: docker container inspect old-k8s-version-986384 --format={{.State.Status}}
	I0422 18:01:47.697815  365956 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0422 18:01:47.699622  365956 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-986384"
	I0422 18:01:47.703045  365956 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0422 18:01:47.701113  365956 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W0422 18:01:47.701146  365956 addons.go:243] addon default-storageclass should already be in state true
	I0422 18:01:47.705126  365956 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0422 18:01:47.709732  365956 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0422 18:01:47.709749  365956 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0422 18:01:47.709810  365956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-986384
	I0422 18:01:47.707718  365956 host.go:66] Checking if "old-k8s-version-986384" exists ...
	I0422 18:01:47.710484  365956 cli_runner.go:164] Run: docker container inspect old-k8s-version-986384 --format={{.State.Status}}
	I0422 18:01:47.707730  365956 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0422 18:01:47.707739  365956 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0422 18:01:47.720094  365956 addons.go:426] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0422 18:01:47.720117  365956 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0422 18:01:47.720167  365956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-986384
	I0422 18:01:47.720097  365956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-986384
	I0422 18:01:47.744259  365956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/18706-2371/.minikube/machines/old-k8s-version-986384/id_rsa Username:docker}
	I0422 18:01:47.762322  365956 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0422 18:01:47.762343  365956 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0422 18:01:47.762408  365956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-986384
	I0422 18:01:47.776864  365956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/18706-2371/.minikube/machines/old-k8s-version-986384/id_rsa Username:docker}
	I0422 18:01:47.792159  365956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/18706-2371/.minikube/machines/old-k8s-version-986384/id_rsa Username:docker}
	I0422 18:01:47.804448  365956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/18706-2371/.minikube/machines/old-k8s-version-986384/id_rsa Username:docker}
	I0422 18:01:47.836560  365956 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0422 18:01:47.881488  365956 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-986384" to be "Ready" ...
	I0422 18:01:47.926923  365956 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0422 18:01:47.937327  365956 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0422 18:01:47.937391  365956 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0422 18:01:47.968289  365956 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0422 18:01:47.968353  365956 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0422 18:01:47.983152  365956 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0422 18:01:47.983218  365956 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0422 18:01:47.990499  365956 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0422 18:01:48.014848  365956 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0422 18:01:48.014916  365956 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0422 18:01:48.064045  365956 addons.go:426] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0422 18:01:48.064129  365956 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0422 18:01:48.076407  365956 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0422 18:01:48.076480  365956 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	W0422 18:01:48.111513  365956 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0422 18:01:48.111634  365956 retry.go:31] will retry after 221.919065ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0422 18:01:48.136287  365956 addons.go:426] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0422 18:01:48.136352  365956 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	W0422 18:01:48.143992  365956 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0422 18:01:48.144071  365956 retry.go:31] will retry after 244.5935ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0422 18:01:48.144404  365956 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0422 18:01:48.167825  365956 addons.go:426] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0422 18:01:48.167908  365956 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0422 18:01:48.191163  365956 addons.go:426] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0422 18:01:48.191242  365956 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0422 18:01:48.214939  365956 addons.go:426] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0422 18:01:48.215016  365956 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0422 18:01:48.236565  365956 addons.go:426] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0422 18:01:48.236641  365956 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	W0422 18:01:48.247625  365956 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0422 18:01:48.247709  365956 retry.go:31] will retry after 134.990489ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0422 18:01:48.259655  365956 addons.go:426] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0422 18:01:48.259682  365956 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0422 18:01:48.278553  365956 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0422 18:01:48.334073  365956 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0422 18:01:48.360420  365956 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0422 18:01:48.360459  365956 retry.go:31] will retry after 168.012049ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0422 18:01:48.382839  365956 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0422 18:01:48.389494  365956 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0422 18:01:48.446006  365956 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0422 18:01:48.446042  365956 retry.go:31] will retry after 508.114621ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0422 18:01:48.512221  365956 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0422 18:01:48.512295  365956 retry.go:31] will retry after 310.441914ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0422 18:01:48.512369  365956 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0422 18:01:48.512401  365956 retry.go:31] will retry after 559.992289ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0422 18:01:48.528633  365956 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0422 18:01:48.606376  365956 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0422 18:01:48.606409  365956 retry.go:31] will retry after 313.514426ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0422 18:01:48.823761  365956 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0422 18:01:48.908867  365956 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0422 18:01:48.908908  365956 retry.go:31] will retry after 659.346255ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0422 18:01:48.921028  365956 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0422 18:01:48.954448  365956 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0422 18:01:49.000421  365956 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0422 18:01:49.000452  365956 retry.go:31] will retry after 611.362627ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0422 18:01:49.044630  365956 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0422 18:01:49.044672  365956 retry.go:31] will retry after 705.992523ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0422 18:01:49.072989  365956 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0422 18:01:49.156048  365956 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0422 18:01:49.156081  365956 retry.go:31] will retry after 694.563877ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0422 18:01:49.569308  365956 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0422 18:01:49.612879  365956 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0422 18:01:49.658278  365956 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0422 18:01:49.658352  365956 retry.go:31] will retry after 1.24959819s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0422 18:01:49.711444  365956 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0422 18:01:49.711477  365956 retry.go:31] will retry after 514.601206ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0422 18:01:49.751596  365956 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0422 18:01:49.836823  365956 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0422 18:01:49.836857  365956 retry.go:31] will retry after 986.952798ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0422 18:01:49.851185  365956 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0422 18:01:49.882841  365956 node_ready.go:53] error getting node "old-k8s-version-986384": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-986384": dial tcp 192.168.76.2:8443: connect: connection refused
	W0422 18:01:49.941396  365956 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0422 18:01:49.941441  365956 retry.go:31] will retry after 1.105077598s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0422 18:01:50.226897  365956 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0422 18:01:50.304704  365956 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0422 18:01:50.304736  365956 retry.go:31] will retry after 1.687127306s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0422 18:01:50.824820  365956 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0422 18:01:50.903571  365956 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0422 18:01:50.903651  365956 retry.go:31] will retry after 1.031459557s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0422 18:01:50.908796  365956 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0422 18:01:51.047198  365956 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0422 18:01:51.078384  365956 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0422 18:01:51.078429  365956 retry.go:31] will retry after 975.553108ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0422 18:01:51.210845  365956 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0422 18:01:51.210919  365956 retry.go:31] will retry after 993.966385ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0422 18:01:51.936282  365956 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0422 18:01:51.992744  365956 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0422 18:01:52.055031  365956 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0422 18:01:52.205659  365956 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0422 18:01:59.939212  365956 node_ready.go:49] node "old-k8s-version-986384" has status "Ready":"True"
	I0422 18:01:59.939237  365956 node_ready.go:38] duration metric: took 12.057642675s for node "old-k8s-version-986384" to be "Ready" ...
	I0422 18:01:59.939248  365956 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0422 18:02:00.170166  365956 pod_ready.go:78] waiting up to 6m0s for pod "coredns-74ff55c5b-6dgwt" in "kube-system" namespace to be "Ready" ...
	I0422 18:02:00.375642  365956 pod_ready.go:92] pod "coredns-74ff55c5b-6dgwt" in "kube-system" namespace has status "Ready":"True"
	I0422 18:02:00.375677  365956 pod_ready.go:81] duration metric: took 205.463958ms for pod "coredns-74ff55c5b-6dgwt" in "kube-system" namespace to be "Ready" ...
	I0422 18:02:00.375702  365956 pod_ready.go:78] waiting up to 6m0s for pod "etcd-old-k8s-version-986384" in "kube-system" namespace to be "Ready" ...
	I0422 18:02:01.374865  365956 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.43849539s)
	I0422 18:02:01.542490  365956 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (9.549695822s)
	I0422 18:02:01.544275  365956 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-986384 addons enable metrics-server
	
	I0422 18:02:01.542717  365956 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (9.487649979s)
	I0422 18:02:01.542735  365956 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (9.337046036s)
	I0422 18:02:01.546103  365956 addons.go:470] Verifying addon metrics-server=true in "old-k8s-version-986384"
	I0422 18:02:01.557719  365956 out.go:177] * Enabled addons: storage-provisioner, dashboard, metrics-server, default-storageclass
	I0422 18:02:01.559492  365956 addons.go:505] duration metric: took 13.924593581s for enable addons: enabled=[storage-provisioner dashboard metrics-server default-storageclass]
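All four addon bundles complete within the same second once the apiserver answers; their Run and retry lines interleave above because the applies evidently run on separate goroutines. A sketch of that fan-out using the real golang.org/x/sync/errgroup package follows; applyOnce and the manifest list are stand-ins for illustration, not minikube's addons.go.

    // Fan out the addon applies and wait for all of them;
    // applyOnce stands in for the retrying apply sketched earlier.
    package main

    import (
        "fmt"
        "os/exec"

        "golang.org/x/sync/errgroup"
    )

    func applyOnce(path string) error {
        out, err := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
            "/var/lib/minikube/binaries/v1.20.0/kubectl",
            "apply", "--force", "-f", path).CombinedOutput()
        if err != nil {
            return fmt.Errorf("%s: %w\n%s", path, err, out)
        }
        return nil
    }

    func main() {
        var g errgroup.Group
        for _, m := range []string{
            "/etc/kubernetes/addons/storage-provisioner.yaml",
            "/etc/kubernetes/addons/storageclass.yaml",
        } {
            m := m // capture the loop variable (needed before Go 1.22)
            g.Go(func() error { return applyOnce(m) })
        }
        fmt.Println("addons applied:", g.Wait())
    }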
	I0422 18:02:02.384099  365956 pod_ready.go:102] pod "etcd-old-k8s-version-986384" in "kube-system" namespace has status "Ready":"False"
	I0422 18:02:04.884272  365956 pod_ready.go:102] pod "etcd-old-k8s-version-986384" in "kube-system" namespace has status "Ready":"False"
	I0422 18:02:07.383756  365956 pod_ready.go:102] pod "etcd-old-k8s-version-986384" in "kube-system" namespace has status "Ready":"False"
	I0422 18:02:09.883051  365956 pod_ready.go:102] pod "etcd-old-k8s-version-986384" in "kube-system" namespace has status "Ready":"False"
	I0422 18:02:11.883123  365956 pod_ready.go:102] pod "etcd-old-k8s-version-986384" in "kube-system" namespace has status "Ready":"False"
	I0422 18:02:13.883498  365956 pod_ready.go:102] pod "etcd-old-k8s-version-986384" in "kube-system" namespace has status "Ready":"False"
	I0422 18:02:16.383374  365956 pod_ready.go:102] pod "etcd-old-k8s-version-986384" in "kube-system" namespace has status "Ready":"False"
	I0422 18:02:18.383961  365956 pod_ready.go:102] pod "etcd-old-k8s-version-986384" in "kube-system" namespace has status "Ready":"False"
	I0422 18:02:20.384120  365956 pod_ready.go:102] pod "etcd-old-k8s-version-986384" in "kube-system" namespace has status "Ready":"False"
	I0422 18:02:22.389481  365956 pod_ready.go:102] pod "etcd-old-k8s-version-986384" in "kube-system" namespace has status "Ready":"False"
	I0422 18:02:24.392033  365956 pod_ready.go:102] pod "etcd-old-k8s-version-986384" in "kube-system" namespace has status "Ready":"False"
	I0422 18:02:26.883806  365956 pod_ready.go:102] pod "etcd-old-k8s-version-986384" in "kube-system" namespace has status "Ready":"False"
	I0422 18:02:28.883892  365956 pod_ready.go:102] pod "etcd-old-k8s-version-986384" in "kube-system" namespace has status "Ready":"False"
	I0422 18:02:31.383282  365956 pod_ready.go:102] pod "etcd-old-k8s-version-986384" in "kube-system" namespace has status "Ready":"False"
	I0422 18:02:33.383630  365956 pod_ready.go:102] pod "etcd-old-k8s-version-986384" in "kube-system" namespace has status "Ready":"False"
	I0422 18:02:35.884119  365956 pod_ready.go:102] pod "etcd-old-k8s-version-986384" in "kube-system" namespace has status "Ready":"False"
	I0422 18:02:38.383770  365956 pod_ready.go:102] pod "etcd-old-k8s-version-986384" in "kube-system" namespace has status "Ready":"False"
	I0422 18:02:40.882941  365956 pod_ready.go:102] pod "etcd-old-k8s-version-986384" in "kube-system" namespace has status "Ready":"False"
	I0422 18:02:42.884099  365956 pod_ready.go:102] pod "etcd-old-k8s-version-986384" in "kube-system" namespace has status "Ready":"False"
	I0422 18:02:44.884363  365956 pod_ready.go:102] pod "etcd-old-k8s-version-986384" in "kube-system" namespace has status "Ready":"False"
	I0422 18:02:47.383411  365956 pod_ready.go:102] pod "etcd-old-k8s-version-986384" in "kube-system" namespace has status "Ready":"False"
	I0422 18:02:49.383713  365956 pod_ready.go:102] pod "etcd-old-k8s-version-986384" in "kube-system" namespace has status "Ready":"False"
	I0422 18:02:51.883996  365956 pod_ready.go:102] pod "etcd-old-k8s-version-986384" in "kube-system" namespace has status "Ready":"False"
	I0422 18:02:54.383160  365956 pod_ready.go:102] pod "etcd-old-k8s-version-986384" in "kube-system" namespace has status "Ready":"False"
	I0422 18:02:56.383305  365956 pod_ready.go:102] pod "etcd-old-k8s-version-986384" in "kube-system" namespace has status "Ready":"False"
	I0422 18:02:58.383498  365956 pod_ready.go:102] pod "etcd-old-k8s-version-986384" in "kube-system" namespace has status "Ready":"False"
	I0422 18:03:00.406400  365956 pod_ready.go:102] pod "etcd-old-k8s-version-986384" in "kube-system" namespace has status "Ready":"False"
	I0422 18:03:02.883841  365956 pod_ready.go:102] pod "etcd-old-k8s-version-986384" in "kube-system" namespace has status "Ready":"False"
	I0422 18:03:04.884185  365956 pod_ready.go:102] pod "etcd-old-k8s-version-986384" in "kube-system" namespace has status "Ready":"False"
	I0422 18:03:07.384076  365956 pod_ready.go:102] pod "etcd-old-k8s-version-986384" in "kube-system" namespace has status "Ready":"False"
	I0422 18:03:09.883869  365956 pod_ready.go:102] pod "etcd-old-k8s-version-986384" in "kube-system" namespace has status "Ready":"False"
	I0422 18:03:12.391144  365956 pod_ready.go:102] pod "etcd-old-k8s-version-986384" in "kube-system" namespace has status "Ready":"False"
	I0422 18:03:14.883244  365956 pod_ready.go:102] pod "etcd-old-k8s-version-986384" in "kube-system" namespace has status "Ready":"False"
	I0422 18:03:17.383697  365956 pod_ready.go:102] pod "etcd-old-k8s-version-986384" in "kube-system" namespace has status "Ready":"False"
	I0422 18:03:19.884198  365956 pod_ready.go:102] pod "etcd-old-k8s-version-986384" in "kube-system" namespace has status "Ready":"False"
	I0422 18:03:22.383202  365956 pod_ready.go:102] pod "etcd-old-k8s-version-986384" in "kube-system" namespace has status "Ready":"False"
	I0422 18:03:24.883968  365956 pod_ready.go:102] pod "etcd-old-k8s-version-986384" in "kube-system" namespace has status "Ready":"False"
	I0422 18:03:27.383941  365956 pod_ready.go:102] pod "etcd-old-k8s-version-986384" in "kube-system" namespace has status "Ready":"False"
	I0422 18:03:28.883955  365956 pod_ready.go:92] pod "etcd-old-k8s-version-986384" in "kube-system" namespace has status "Ready":"True"
	I0422 18:03:28.883979  365956 pod_ready.go:81] duration metric: took 1m28.508268719s for pod "etcd-old-k8s-version-986384" in "kube-system" namespace to be "Ready" ...
	I0422 18:03:28.883991  365956 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-986384" in "kube-system" namespace to be "Ready" ...
	I0422 18:03:28.889922  365956 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-986384" in "kube-system" namespace has status "Ready":"True"
	I0422 18:03:28.889946  365956 pod_ready.go:81] duration metric: took 5.947145ms for pod "kube-apiserver-old-k8s-version-986384" in "kube-system" namespace to be "Ready" ...
	I0422 18:03:28.889958  365956 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-986384" in "kube-system" namespace to be "Ready" ...
	I0422 18:03:28.895542  365956 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-986384" in "kube-system" namespace has status "Ready":"True"
	I0422 18:03:28.895567  365956 pod_ready.go:81] duration metric: took 5.600333ms for pod "kube-controller-manager-old-k8s-version-986384" in "kube-system" namespace to be "Ready" ...
	I0422 18:03:28.895580  365956 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-cpcrp" in "kube-system" namespace to be "Ready" ...
	I0422 18:03:28.900984  365956 pod_ready.go:92] pod "kube-proxy-cpcrp" in "kube-system" namespace has status "Ready":"True"
	I0422 18:03:28.901012  365956 pod_ready.go:81] duration metric: took 5.423493ms for pod "kube-proxy-cpcrp" in "kube-system" namespace to be "Ready" ...
	I0422 18:03:28.901024  365956 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-986384" in "kube-system" namespace to be "Ready" ...
	I0422 18:03:28.906184  365956 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-986384" in "kube-system" namespace has status "Ready":"True"
	I0422 18:03:28.906210  365956 pod_ready.go:81] duration metric: took 5.178035ms for pod "kube-scheduler-old-k8s-version-986384" in "kube-system" namespace to be "Ready" ...
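Every pod_ready.go:78 wait above has the same shape: poll the pod's Ready condition until it flips to True or the budget runs out. Here etcd needed 1m28s while the other control-plane pods were already Ready within milliseconds. A minimal client-go sketch of that loop follows; it shows the general shape only, not minikube's actual pod_ready.go, with the kubeconfig path and pod name taken from the log.

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Poll roughly as often as the log lines above (~2.5s apart),
        // for at most the 6m0s budget the log mentions.
        err = wait.PollImmediate(2500*time.Millisecond, 6*time.Minute, func() (bool, error) {
            pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
                "etcd-old-k8s-version-986384", metav1.GetOptions{})
            if err != nil {
                return false, nil // tolerate transient apiserver errors, keep polling
            }
            for _, c := range pod.Status.Conditions {
                if c.Type == corev1.PodReady {
                    return c.Status == corev1.ConditionTrue, nil
                }
            }
            return false, nil
        })
        fmt.Println("wait result:", err)
    }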
	I0422 18:03:28.906222  365956 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-9975d5f86-ts7pq" in "kube-system" namespace to be "Ready" ...
	I0422 18:03:30.912175  365956 pod_ready.go:102] pod "metrics-server-9975d5f86-ts7pq" in "kube-system" namespace has status "Ready":"False"
	I0422 18:03:32.913272  365956 pod_ready.go:102] pod "metrics-server-9975d5f86-ts7pq" in "kube-system" namespace has status "Ready":"False"
	I0422 18:03:35.413089  365956 pod_ready.go:102] pod "metrics-server-9975d5f86-ts7pq" in "kube-system" namespace has status "Ready":"False"
	I0422 18:03:37.912834  365956 pod_ready.go:102] pod "metrics-server-9975d5f86-ts7pq" in "kube-system" namespace has status "Ready":"False"
	I0422 18:03:40.413555  365956 pod_ready.go:102] pod "metrics-server-9975d5f86-ts7pq" in "kube-system" namespace has status "Ready":"False"
	I0422 18:03:42.912960  365956 pod_ready.go:102] pod "metrics-server-9975d5f86-ts7pq" in "kube-system" namespace has status "Ready":"False"
	I0422 18:03:45.413087  365956 pod_ready.go:102] pod "metrics-server-9975d5f86-ts7pq" in "kube-system" namespace has status "Ready":"False"
	I0422 18:03:47.413864  365956 pod_ready.go:102] pod "metrics-server-9975d5f86-ts7pq" in "kube-system" namespace has status "Ready":"False"
	I0422 18:03:49.912863  365956 pod_ready.go:102] pod "metrics-server-9975d5f86-ts7pq" in "kube-system" namespace has status "Ready":"False"
	I0422 18:03:52.411940  365956 pod_ready.go:102] pod "metrics-server-9975d5f86-ts7pq" in "kube-system" namespace has status "Ready":"False"
	I0422 18:03:54.412402  365956 pod_ready.go:102] pod "metrics-server-9975d5f86-ts7pq" in "kube-system" namespace has status "Ready":"False"
	I0422 18:03:56.413208  365956 pod_ready.go:102] pod "metrics-server-9975d5f86-ts7pq" in "kube-system" namespace has status "Ready":"False"
	I0422 18:03:58.913247  365956 pod_ready.go:102] pod "metrics-server-9975d5f86-ts7pq" in "kube-system" namespace has status "Ready":"False"
	I0422 18:04:01.412533  365956 pod_ready.go:102] pod "metrics-server-9975d5f86-ts7pq" in "kube-system" namespace has status "Ready":"False"
	I0422 18:04:03.421959  365956 pod_ready.go:102] pod "metrics-server-9975d5f86-ts7pq" in "kube-system" namespace has status "Ready":"False"
	I0422 18:04:05.913712  365956 pod_ready.go:102] pod "metrics-server-9975d5f86-ts7pq" in "kube-system" namespace has status "Ready":"False"
	I0422 18:04:08.412663  365956 pod_ready.go:102] pod "metrics-server-9975d5f86-ts7pq" in "kube-system" namespace has status "Ready":"False"
	I0422 18:04:10.413368  365956 pod_ready.go:102] pod "metrics-server-9975d5f86-ts7pq" in "kube-system" namespace has status "Ready":"False"
	I0422 18:04:12.413552  365956 pod_ready.go:102] pod "metrics-server-9975d5f86-ts7pq" in "kube-system" namespace has status "Ready":"False"
	I0422 18:04:14.923544  365956 pod_ready.go:102] pod "metrics-server-9975d5f86-ts7pq" in "kube-system" namespace has status "Ready":"False"
	I0422 18:04:17.412391  365956 pod_ready.go:102] pod "metrics-server-9975d5f86-ts7pq" in "kube-system" namespace has status "Ready":"False"
	I0422 18:04:19.412606  365956 pod_ready.go:102] pod "metrics-server-9975d5f86-ts7pq" in "kube-system" namespace has status "Ready":"False"
	I0422 18:04:21.912999  365956 pod_ready.go:102] pod "metrics-server-9975d5f86-ts7pq" in "kube-system" namespace has status "Ready":"False"
	I0422 18:04:23.913159  365956 pod_ready.go:102] pod "metrics-server-9975d5f86-ts7pq" in "kube-system" namespace has status "Ready":"False"
	I0422 18:04:26.411530  365956 pod_ready.go:102] pod "metrics-server-9975d5f86-ts7pq" in "kube-system" namespace has status "Ready":"False"
	I0422 18:04:28.412742  365956 pod_ready.go:102] pod "metrics-server-9975d5f86-ts7pq" in "kube-system" namespace has status "Ready":"False"
	I0422 18:04:30.412936  365956 pod_ready.go:102] pod "metrics-server-9975d5f86-ts7pq" in "kube-system" namespace has status "Ready":"False"
	I0422 18:04:32.413076  365956 pod_ready.go:102] pod "metrics-server-9975d5f86-ts7pq" in "kube-system" namespace has status "Ready":"False"
	I0422 18:04:34.912526  365956 pod_ready.go:102] pod "metrics-server-9975d5f86-ts7pq" in "kube-system" namespace has status "Ready":"False"
	I0422 18:04:36.913095  365956 pod_ready.go:102] pod "metrics-server-9975d5f86-ts7pq" in "kube-system" namespace has status "Ready":"False"
	I0422 18:04:39.412987  365956 pod_ready.go:102] pod "metrics-server-9975d5f86-ts7pq" in "kube-system" namespace has status "Ready":"False"
	I0422 18:04:41.912708  365956 pod_ready.go:102] pod "metrics-server-9975d5f86-ts7pq" in "kube-system" namespace has status "Ready":"False"
	I0422 18:04:43.912929  365956 pod_ready.go:102] pod "metrics-server-9975d5f86-ts7pq" in "kube-system" namespace has status "Ready":"False"
	I0422 18:04:45.913610  365956 pod_ready.go:102] pod "metrics-server-9975d5f86-ts7pq" in "kube-system" namespace has status "Ready":"False"
	I0422 18:04:48.412710  365956 pod_ready.go:102] pod "metrics-server-9975d5f86-ts7pq" in "kube-system" namespace has status "Ready":"False"
	I0422 18:04:50.912383  365956 pod_ready.go:102] pod "metrics-server-9975d5f86-ts7pq" in "kube-system" namespace has status "Ready":"False"
	I0422 18:04:52.914226  365956 pod_ready.go:102] pod "metrics-server-9975d5f86-ts7pq" in "kube-system" namespace has status "Ready":"False"
	I0422 18:04:55.412312  365956 pod_ready.go:102] pod "metrics-server-9975d5f86-ts7pq" in "kube-system" namespace has status "Ready":"False"
	I0422 18:04:57.412510  365956 pod_ready.go:102] pod "metrics-server-9975d5f86-ts7pq" in "kube-system" namespace has status "Ready":"False"
	I0422 18:04:59.912746  365956 pod_ready.go:102] pod "metrics-server-9975d5f86-ts7pq" in "kube-system" namespace has status "Ready":"False"
	I0422 18:05:01.913319  365956 pod_ready.go:102] pod "metrics-server-9975d5f86-ts7pq" in "kube-system" namespace has status "Ready":"False"
	I0422 18:05:04.412969  365956 pod_ready.go:102] pod "metrics-server-9975d5f86-ts7pq" in "kube-system" namespace has status "Ready":"False"
	I0422 18:05:06.912962  365956 pod_ready.go:102] pod "metrics-server-9975d5f86-ts7pq" in "kube-system" namespace has status "Ready":"False"
	I0422 18:05:08.915891  365956 pod_ready.go:102] pod "metrics-server-9975d5f86-ts7pq" in "kube-system" namespace has status "Ready":"False"
	I0422 18:05:11.471638  365956 pod_ready.go:102] pod "metrics-server-9975d5f86-ts7pq" in "kube-system" namespace has status "Ready":"False"
	I0422 18:05:13.913248  365956 pod_ready.go:102] pod "metrics-server-9975d5f86-ts7pq" in "kube-system" namespace has status "Ready":"False"
	I0422 18:05:16.413354  365956 pod_ready.go:102] pod "metrics-server-9975d5f86-ts7pq" in "kube-system" namespace has status "Ready":"False"
	I0422 18:05:18.912373  365956 pod_ready.go:102] pod "metrics-server-9975d5f86-ts7pq" in "kube-system" namespace has status "Ready":"False"
	I0422 18:05:21.412601  365956 pod_ready.go:102] pod "metrics-server-9975d5f86-ts7pq" in "kube-system" namespace has status "Ready":"False"
	I0422 18:05:23.412943  365956 pod_ready.go:102] pod "metrics-server-9975d5f86-ts7pq" in "kube-system" namespace has status "Ready":"False"
	I0422 18:05:25.913156  365956 pod_ready.go:102] pod "metrics-server-9975d5f86-ts7pq" in "kube-system" namespace has status "Ready":"False"
	I0422 18:05:28.412895  365956 pod_ready.go:102] pod "metrics-server-9975d5f86-ts7pq" in "kube-system" namespace has status "Ready":"False"
	I0422 18:05:30.413508  365956 pod_ready.go:102] pod "metrics-server-9975d5f86-ts7pq" in "kube-system" namespace has status "Ready":"False"
	I0422 18:05:32.912105  365956 pod_ready.go:102] pod "metrics-server-9975d5f86-ts7pq" in "kube-system" namespace has status "Ready":"False"
	I0422 18:05:34.913242  365956 pod_ready.go:102] pod "metrics-server-9975d5f86-ts7pq" in "kube-system" namespace has status "Ready":"False"
	I0422 18:05:36.929207  365956 pod_ready.go:102] pod "metrics-server-9975d5f86-ts7pq" in "kube-system" namespace has status "Ready":"False"
	I0422 18:05:39.412971  365956 pod_ready.go:102] pod "metrics-server-9975d5f86-ts7pq" in "kube-system" namespace has status "Ready":"False"
	I0422 18:05:41.925644  365956 pod_ready.go:102] pod "metrics-server-9975d5f86-ts7pq" in "kube-system" namespace has status "Ready":"False"
	I0422 18:05:44.414207  365956 pod_ready.go:102] pod "metrics-server-9975d5f86-ts7pq" in "kube-system" namespace has status "Ready":"False"
	I0422 18:05:46.913461  365956 pod_ready.go:102] pod "metrics-server-9975d5f86-ts7pq" in "kube-system" namespace has status "Ready":"False"
	I0422 18:05:49.415596  365956 pod_ready.go:102] pod "metrics-server-9975d5f86-ts7pq" in "kube-system" namespace has status "Ready":"False"
	I0422 18:05:51.915479  365956 pod_ready.go:102] pod "metrics-server-9975d5f86-ts7pq" in "kube-system" namespace has status "Ready":"False"
	I0422 18:05:54.413473  365956 pod_ready.go:102] pod "metrics-server-9975d5f86-ts7pq" in "kube-system" namespace has status "Ready":"False"
	I0422 18:05:56.413528  365956 pod_ready.go:102] pod "metrics-server-9975d5f86-ts7pq" in "kube-system" namespace has status "Ready":"False"
	I0422 18:05:58.421088  365956 pod_ready.go:102] pod "metrics-server-9975d5f86-ts7pq" in "kube-system" namespace has status "Ready":"False"
	I0422 18:06:00.915869  365956 pod_ready.go:102] pod "metrics-server-9975d5f86-ts7pq" in "kube-system" namespace has status "Ready":"False"
	I0422 18:06:03.413896  365956 pod_ready.go:102] pod "metrics-server-9975d5f86-ts7pq" in "kube-system" namespace has status "Ready":"False"
	I0422 18:06:05.915793  365956 pod_ready.go:102] pod "metrics-server-9975d5f86-ts7pq" in "kube-system" namespace has status "Ready":"False"
	I0422 18:06:08.412199  365956 pod_ready.go:102] pod "metrics-server-9975d5f86-ts7pq" in "kube-system" namespace has status "Ready":"False"
	I0422 18:06:10.413320  365956 pod_ready.go:102] pod "metrics-server-9975d5f86-ts7pq" in "kube-system" namespace has status "Ready":"False"
	I0422 18:06:12.912884  365956 pod_ready.go:102] pod "metrics-server-9975d5f86-ts7pq" in "kube-system" namespace has status "Ready":"False"
	I0422 18:06:14.913565  365956 pod_ready.go:102] pod "metrics-server-9975d5f86-ts7pq" in "kube-system" namespace has status "Ready":"False"
	I0422 18:06:16.914170  365956 pod_ready.go:102] pod "metrics-server-9975d5f86-ts7pq" in "kube-system" namespace has status "Ready":"False"
	I0422 18:06:19.412938  365956 pod_ready.go:102] pod "metrics-server-9975d5f86-ts7pq" in "kube-system" namespace has status "Ready":"False"
	I0422 18:06:21.912137  365956 pod_ready.go:102] pod "metrics-server-9975d5f86-ts7pq" in "kube-system" namespace has status "Ready":"False"
	I0422 18:06:23.912833  365956 pod_ready.go:102] pod "metrics-server-9975d5f86-ts7pq" in "kube-system" namespace has status "Ready":"False"
	I0422 18:06:25.912957  365956 pod_ready.go:102] pod "metrics-server-9975d5f86-ts7pq" in "kube-system" namespace has status "Ready":"False"
	I0422 18:06:27.913044  365956 pod_ready.go:102] pod "metrics-server-9975d5f86-ts7pq" in "kube-system" namespace has status "Ready":"False"
	I0422 18:06:30.412898  365956 pod_ready.go:102] pod "metrics-server-9975d5f86-ts7pq" in "kube-system" namespace has status "Ready":"False"
	I0422 18:06:32.414169  365956 pod_ready.go:102] pod "metrics-server-9975d5f86-ts7pq" in "kube-system" namespace has status "Ready":"False"
	I0422 18:06:34.911990  365956 pod_ready.go:102] pod "metrics-server-9975d5f86-ts7pq" in "kube-system" namespace has status "Ready":"False"
	I0422 18:06:36.912348  365956 pod_ready.go:102] pod "metrics-server-9975d5f86-ts7pq" in "kube-system" namespace has status "Ready":"False"
	I0422 18:06:39.413336  365956 pod_ready.go:102] pod "metrics-server-9975d5f86-ts7pq" in "kube-system" namespace has status "Ready":"False"
	I0422 18:06:41.912307  365956 pod_ready.go:102] pod "metrics-server-9975d5f86-ts7pq" in "kube-system" namespace has status "Ready":"False"
	I0422 18:06:44.413279  365956 pod_ready.go:102] pod "metrics-server-9975d5f86-ts7pq" in "kube-system" namespace has status "Ready":"False"
	I0422 18:06:46.912140  365956 pod_ready.go:102] pod "metrics-server-9975d5f86-ts7pq" in "kube-system" namespace has status "Ready":"False"
	I0422 18:06:49.412665  365956 pod_ready.go:102] pod "metrics-server-9975d5f86-ts7pq" in "kube-system" namespace has status "Ready":"False"
	I0422 18:06:51.914046  365956 pod_ready.go:102] pod "metrics-server-9975d5f86-ts7pq" in "kube-system" namespace has status "Ready":"False"
	I0422 18:06:54.412693  365956 pod_ready.go:102] pod "metrics-server-9975d5f86-ts7pq" in "kube-system" namespace has status "Ready":"False"
	I0422 18:06:56.912275  365956 pod_ready.go:102] pod "metrics-server-9975d5f86-ts7pq" in "kube-system" namespace has status "Ready":"False"
	I0422 18:06:58.914019  365956 pod_ready.go:102] pod "metrics-server-9975d5f86-ts7pq" in "kube-system" namespace has status "Ready":"False"
	I0422 18:07:01.412728  365956 pod_ready.go:102] pod "metrics-server-9975d5f86-ts7pq" in "kube-system" namespace has status "Ready":"False"
	I0422 18:07:03.911888  365956 pod_ready.go:102] pod "metrics-server-9975d5f86-ts7pq" in "kube-system" namespace has status "Ready":"False"
	I0422 18:07:05.913589  365956 pod_ready.go:102] pod "metrics-server-9975d5f86-ts7pq" in "kube-system" namespace has status "Ready":"False"
	I0422 18:07:07.923781  365956 pod_ready.go:102] pod "metrics-server-9975d5f86-ts7pq" in "kube-system" namespace has status "Ready":"False"
	I0422 18:07:10.411742  365956 pod_ready.go:102] pod "metrics-server-9975d5f86-ts7pq" in "kube-system" namespace has status "Ready":"False"
	I0422 18:07:12.415123  365956 pod_ready.go:102] pod "metrics-server-9975d5f86-ts7pq" in "kube-system" namespace has status "Ready":"False"
	I0422 18:07:14.415686  365956 pod_ready.go:102] pod "metrics-server-9975d5f86-ts7pq" in "kube-system" namespace has status "Ready":"False"
	I0422 18:07:16.923201  365956 pod_ready.go:102] pod "metrics-server-9975d5f86-ts7pq" in "kube-system" namespace has status "Ready":"False"
	I0422 18:07:19.413145  365956 pod_ready.go:102] pod "metrics-server-9975d5f86-ts7pq" in "kube-system" namespace has status "Ready":"False"
	I0422 18:07:21.913081  365956 pod_ready.go:102] pod "metrics-server-9975d5f86-ts7pq" in "kube-system" namespace has status "Ready":"False"
	I0422 18:07:24.412193  365956 pod_ready.go:102] pod "metrics-server-9975d5f86-ts7pq" in "kube-system" namespace has status "Ready":"False"
	I0422 18:07:26.413214  365956 pod_ready.go:102] pod "metrics-server-9975d5f86-ts7pq" in "kube-system" namespace has status "Ready":"False"
	I0422 18:07:28.922687  365956 pod_ready.go:102] pod "metrics-server-9975d5f86-ts7pq" in "kube-system" namespace has status "Ready":"False"
	I0422 18:07:28.922717  365956 pod_ready.go:81] duration metric: took 4m0.0164876s for pod "metrics-server-9975d5f86-ts7pq" in "kube-system" namespace to be "Ready" ...
	E0422 18:07:28.922728  365956 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0422 18:07:28.922736  365956 pod_ready.go:38] duration metric: took 5m28.983477844s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
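The only pod that never turns Ready is metrics-server: its wait is cut off after 4m with "context deadline exceeded", and the kubelet entries at the bottom of this section show why (the pod's image cannot be pulled). The error text itself is just what an expiring context returns from a loop of this shape; this is a generic sketch, and minikube's waitPodCondition signature may differ.

    package main

    import (
        "context"
        "fmt"
        "time"
    )

    // waitPodCondition races a Ready poll against a context deadline.
    func waitPodCondition(ctx context.Context, ready func() bool) error {
        tick := time.NewTicker(2500 * time.Millisecond)
        defer tick.Stop()
        for {
            select {
            case <-ctx.Done():
                return ctx.Err() // yields "context deadline exceeded"
            case <-tick.C:
                if ready() {
                    return nil
                }
            }
        }
    }

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
        defer cancel()
        // metrics-server never reports Ready (its image pull keeps
        // failing), so the deadline always wins this race.
        fmt.Println(waitPodCondition(ctx, func() bool { return false }))
    }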
	I0422 18:07:28.922790  365956 api_server.go:52] waiting for apiserver process to appear ...
	I0422 18:07:28.922897  365956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0422 18:07:28.955854  365956 logs.go:276] 2 containers: [cfd818ddce4e 36734b404817]
	I0422 18:07:28.955975  365956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0422 18:07:28.979086  365956 logs.go:276] 2 containers: [5899397ea4a9 f029b0e7c02d]
	I0422 18:07:28.979186  365956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0422 18:07:28.997173  365956 logs.go:276] 2 containers: [27648c4ab762 f9d8bddf7197]
	I0422 18:07:28.997325  365956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0422 18:07:29.033417  365956 logs.go:276] 2 containers: [8d4da1dcae53 4ef6e29c4a78]
	I0422 18:07:29.033558  365956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0422 18:07:29.076926  365956 logs.go:276] 2 containers: [99181fe4786b 2670916c7c26]
	I0422 18:07:29.077015  365956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0422 18:07:29.096098  365956 logs.go:276] 2 containers: [980fcb0f5f0b d0fea9e4ec1f]
	I0422 18:07:29.096268  365956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0422 18:07:29.115469  365956 logs.go:276] 0 containers: []
	W0422 18:07:29.115495  365956 logs.go:278] No container was found matching "kindnet"
	I0422 18:07:29.115565  365956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0422 18:07:29.135054  365956 logs.go:276] 1 containers: [38455e350073]
	I0422 18:07:29.135166  365956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0422 18:07:29.157549  365956 logs.go:276] 2 containers: [2e587f1e7363 9dca99e0f1e4]
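With the wait abandoned, the diagnostics phase starts by enumerating container IDs per component through a docker name filter. Most components return two IDs because this is the cluster's second start: the exited first-start containers are still listed alongside the running ones (kindnet returns none since this cluster does not run it). The enumeration is just the logged docker command wrapped up, as in this sketch:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs lists all containers, running or exited, whose
    // name carries the kubelet-assigned k8s_<component> prefix.
    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns",
            "kube-scheduler", "kube-proxy", "kube-controller-manager"} {
            ids, err := containerIDs(c)
            fmt.Println(c, ids, err)
        }
    }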
	I0422 18:07:29.157650  365956 logs.go:123] Gathering logs for container status ...
	I0422 18:07:29.157705  365956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:07:29.256493  365956 logs.go:123] Gathering logs for dmesg ...
	I0422 18:07:29.256681  365956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:07:29.279579  365956 logs.go:123] Gathering logs for kube-apiserver [36734b404817] ...
	I0422 18:07:29.279617  365956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36734b404817"
	I0422 18:07:29.441285  365956 logs.go:123] Gathering logs for kube-scheduler [4ef6e29c4a78] ...
	I0422 18:07:29.441326  365956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ef6e29c4a78"
	I0422 18:07:29.486255  365956 logs.go:123] Gathering logs for kube-proxy [2670916c7c26] ...
	I0422 18:07:29.486291  365956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2670916c7c26"
	I0422 18:07:29.517182  365956 logs.go:123] Gathering logs for kube-controller-manager [980fcb0f5f0b] ...
	I0422 18:07:29.517212  365956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 980fcb0f5f0b"
	I0422 18:07:29.587467  365956 logs.go:123] Gathering logs for Docker ...
	I0422 18:07:29.587506  365956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0422 18:07:29.633184  365956 logs.go:123] Gathering logs for kube-apiserver [cfd818ddce4e] ...
	I0422 18:07:29.633223  365956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfd818ddce4e"
	I0422 18:07:29.708734  365956 logs.go:123] Gathering logs for etcd [f029b0e7c02d] ...
	I0422 18:07:29.708877  365956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f029b0e7c02d"
	I0422 18:07:29.751034  365956 logs.go:123] Gathering logs for coredns [27648c4ab762] ...
	I0422 18:07:29.751077  365956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27648c4ab762"
	I0422 18:07:29.796111  365956 logs.go:123] Gathering logs for kube-proxy [99181fe4786b] ...
	I0422 18:07:29.796299  365956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99181fe4786b"
	I0422 18:07:29.830508  365956 logs.go:123] Gathering logs for kubelet ...
	I0422 18:07:29.830540  365956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0422 18:07:29.888134  365956 logs.go:138] Found kubelet problem: Apr 22 18:01:59 old-k8s-version-986384 kubelet[1196]: E0422 18:01:59.832037    1196 reflector.go:138] object-"kube-system"/"storage-provisioner-token-g68k5": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-g68k5" is forbidden: User "system:node:old-k8s-version-986384" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-986384' and this object
	W0422 18:07:29.888394  365956 logs.go:138] Found kubelet problem: Apr 22 18:01:59 old-k8s-version-986384 kubelet[1196]: E0422 18:01:59.832152    1196 reflector.go:138] object-"kube-system"/"kube-proxy-token-f585x": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-f585x" is forbidden: User "system:node:old-k8s-version-986384" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-986384' and this object
	W0422 18:07:29.888628  365956 logs.go:138] Found kubelet problem: Apr 22 18:01:59 old-k8s-version-986384 kubelet[1196]: E0422 18:01:59.832232    1196 reflector.go:138] object-"default"/"default-token-b4l4p": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-b4l4p" is forbidden: User "system:node:old-k8s-version-986384" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-986384' and this object
	W0422 18:07:29.888876  365956 logs.go:138] Found kubelet problem: Apr 22 18:01:59 old-k8s-version-986384 kubelet[1196]: E0422 18:01:59.832306    1196 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-986384" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-986384' and this object
	W0422 18:07:29.889095  365956 logs.go:138] Found kubelet problem: Apr 22 18:01:59 old-k8s-version-986384 kubelet[1196]: E0422 18:01:59.832383    1196 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-986384" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-986384' and this object
	W0422 18:07:29.889307  365956 logs.go:138] Found kubelet problem: Apr 22 18:01:59 old-k8s-version-986384 kubelet[1196]: E0422 18:01:59.832559    1196 reflector.go:138] object-"kube-system"/"coredns-token-2dv2q": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-2dv2q" is forbidden: User "system:node:old-k8s-version-986384" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-986384' and this object
	W0422 18:07:29.898081  365956 logs.go:138] Found kubelet problem: Apr 22 18:02:03 old-k8s-version-986384 kubelet[1196]: E0422 18:02:03.226868    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0422 18:07:29.898812  365956 logs.go:138] Found kubelet problem: Apr 22 18:02:03 old-k8s-version-986384 kubelet[1196]: E0422 18:02:03.888305    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0422 18:07:29.901567  365956 logs.go:138] Found kubelet problem: Apr 22 18:02:15 old-k8s-version-986384 kubelet[1196]: E0422 18:02:15.641552    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0422 18:07:29.911177  365956 logs.go:138] Found kubelet problem: Apr 22 18:02:24 old-k8s-version-986384 kubelet[1196]: E0422 18:02:24.265785    1196 pod_workers.go:191] Error syncing pod e523bdd8-d720-4b25-957f-a4eb6c91e844 ("dashboard-metrics-scraper-8d5bb5db8-mf7fs_kubernetes-dashboard(e523bdd8-d720-4b25-957f-a4eb6c91e844)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0422 18:07:29.911576  365956 logs.go:138] Found kubelet problem: Apr 22 18:02:24 old-k8s-version-986384 kubelet[1196]: E0422 18:02:24.368306    1196 pod_workers.go:191] Error syncing pod e523bdd8-d720-4b25-957f-a4eb6c91e844 ("dashboard-metrics-scraper-8d5bb5db8-mf7fs_kubernetes-dashboard(e523bdd8-d720-4b25-957f-a4eb6c91e844)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0422 18:07:29.911763  365956 logs.go:138] Found kubelet problem: Apr 22 18:02:30 old-k8s-version-986384 kubelet[1196]: E0422 18:02:30.628159    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0422 18:07:29.912531  365956 logs.go:138] Found kubelet problem: Apr 22 18:02:32 old-k8s-version-986384 kubelet[1196]: E0422 18:02:32.453214    1196 pod_workers.go:191] Error syncing pod df339435-cb7d-470a-8aec-c5eb3f389a93 ("storage-provisioner_kube-system(df339435-cb7d-470a-8aec-c5eb3f389a93)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(df339435-cb7d-470a-8aec-c5eb3f389a93)"
	W0422 18:07:29.915088  365956 logs.go:138] Found kubelet problem: Apr 22 18:02:39 old-k8s-version-986384 kubelet[1196]: E0422 18:02:39.049200    1196 pod_workers.go:191] Error syncing pod e523bdd8-d720-4b25-957f-a4eb6c91e844 ("dashboard-metrics-scraper-8d5bb5db8-mf7fs_kubernetes-dashboard(e523bdd8-d720-4b25-957f-a4eb6c91e844)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0422 18:07:29.918558  365956 logs.go:138] Found kubelet problem: Apr 22 18:02:43 old-k8s-version-986384 kubelet[1196]: E0422 18:02:43.646522    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0422 18:07:29.919251  365956 logs.go:138] Found kubelet problem: Apr 22 18:02:52 old-k8s-version-986384 kubelet[1196]: E0422 18:02:52.618806    1196 pod_workers.go:191] Error syncing pod e523bdd8-d720-4b25-957f-a4eb6c91e844 ("dashboard-metrics-scraper-8d5bb5db8-mf7fs_kubernetes-dashboard(e523bdd8-d720-4b25-957f-a4eb6c91e844)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0422 18:07:29.919438  365956 logs.go:138] Found kubelet problem: Apr 22 18:02:58 old-k8s-version-986384 kubelet[1196]: E0422 18:02:58.622275    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0422 18:07:29.922358  365956 logs.go:138] Found kubelet problem: Apr 22 18:03:05 old-k8s-version-986384 kubelet[1196]: E0422 18:03:05.091231    1196 pod_workers.go:191] Error syncing pod e523bdd8-d720-4b25-957f-a4eb6c91e844 ("dashboard-metrics-scraper-8d5bb5db8-mf7fs_kubernetes-dashboard(e523bdd8-d720-4b25-957f-a4eb6c91e844)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0422 18:07:29.922613  365956 logs.go:138] Found kubelet problem: Apr 22 18:03:10 old-k8s-version-986384 kubelet[1196]: E0422 18:03:10.630961    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0422 18:07:29.922958  365956 logs.go:138] Found kubelet problem: Apr 22 18:03:16 old-k8s-version-986384 kubelet[1196]: E0422 18:03:16.618514    1196 pod_workers.go:191] Error syncing pod e523bdd8-d720-4b25-957f-a4eb6c91e844 ("dashboard-metrics-scraper-8d5bb5db8-mf7fs_kubernetes-dashboard(e523bdd8-d720-4b25-957f-a4eb6c91e844)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0422 18:07:29.923201  365956 logs.go:138] Found kubelet problem: Apr 22 18:03:23 old-k8s-version-986384 kubelet[1196]: E0422 18:03:23.618416    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0422 18:07:29.923405  365956 logs.go:138] Found kubelet problem: Apr 22 18:03:30 old-k8s-version-986384 kubelet[1196]: E0422 18:03:30.632521    1196 pod_workers.go:191] Error syncing pod e523bdd8-d720-4b25-957f-a4eb6c91e844 ("dashboard-metrics-scraper-8d5bb5db8-mf7fs_kubernetes-dashboard(e523bdd8-d720-4b25-957f-a4eb6c91e844)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0422 18:07:29.925888  365956 logs.go:138] Found kubelet problem: Apr 22 18:03:34 old-k8s-version-986384 kubelet[1196]: E0422 18:03:34.649485    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0422 18:07:29.926096  365956 logs.go:138] Found kubelet problem: Apr 22 18:03:43 old-k8s-version-986384 kubelet[1196]: E0422 18:03:43.618651    1196 pod_workers.go:191] Error syncing pod e523bdd8-d720-4b25-957f-a4eb6c91e844 ("dashboard-metrics-scraper-8d5bb5db8-mf7fs_kubernetes-dashboard(e523bdd8-d720-4b25-957f-a4eb6c91e844)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0422 18:07:29.926287  365956 logs.go:138] Found kubelet problem: Apr 22 18:03:46 old-k8s-version-986384 kubelet[1196]: E0422 18:03:46.619360    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0422 18:07:29.928592  365956 logs.go:138] Found kubelet problem: Apr 22 18:03:59 old-k8s-version-986384 kubelet[1196]: E0422 18:03:59.083435    1196 pod_workers.go:191] Error syncing pod e523bdd8-d720-4b25-957f-a4eb6c91e844 ("dashboard-metrics-scraper-8d5bb5db8-mf7fs_kubernetes-dashboard(e523bdd8-d720-4b25-957f-a4eb6c91e844)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0422 18:07:29.936494  365956 logs.go:138] Found kubelet problem: Apr 22 18:03:59 old-k8s-version-986384 kubelet[1196]: E0422 18:03:59.618394    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0422 18:07:29.936812  365956 logs.go:138] Found kubelet problem: Apr 22 18:04:12 old-k8s-version-986384 kubelet[1196]: E0422 18:04:12.618866    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0422 18:07:29.937069  365956 logs.go:138] Found kubelet problem: Apr 22 18:04:13 old-k8s-version-986384 kubelet[1196]: E0422 18:04:13.618430    1196 pod_workers.go:191] Error syncing pod e523bdd8-d720-4b25-957f-a4eb6c91e844 ("dashboard-metrics-scraper-8d5bb5db8-mf7fs_kubernetes-dashboard(e523bdd8-d720-4b25-957f-a4eb6c91e844)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0422 18:07:29.937262  365956 logs.go:138] Found kubelet problem: Apr 22 18:04:23 old-k8s-version-986384 kubelet[1196]: E0422 18:04:23.618440    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0422 18:07:29.937642  365956 logs.go:138] Found kubelet problem: Apr 22 18:04:24 old-k8s-version-986384 kubelet[1196]: E0422 18:04:24.643322    1196 pod_workers.go:191] Error syncing pod e523bdd8-d720-4b25-957f-a4eb6c91e844 ("dashboard-metrics-scraper-8d5bb5db8-mf7fs_kubernetes-dashboard(e523bdd8-d720-4b25-957f-a4eb6c91e844)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0422 18:07:29.937831  365956 logs.go:138] Found kubelet problem: Apr 22 18:04:38 old-k8s-version-986384 kubelet[1196]: E0422 18:04:38.629329    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0422 18:07:29.938079  365956 logs.go:138] Found kubelet problem: Apr 22 18:04:38 old-k8s-version-986384 kubelet[1196]: E0422 18:04:38.634742    1196 pod_workers.go:191] Error syncing pod e523bdd8-d720-4b25-957f-a4eb6c91e844 ("dashboard-metrics-scraper-8d5bb5db8-mf7fs_kubernetes-dashboard(e523bdd8-d720-4b25-957f-a4eb6c91e844)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0422 18:07:29.938402  365956 logs.go:138] Found kubelet problem: Apr 22 18:04:49 old-k8s-version-986384 kubelet[1196]: E0422 18:04:49.618463    1196 pod_workers.go:191] Error syncing pod e523bdd8-d720-4b25-957f-a4eb6c91e844 ("dashboard-metrics-scraper-8d5bb5db8-mf7fs_kubernetes-dashboard(e523bdd8-d720-4b25-957f-a4eb6c91e844)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0422 18:07:29.938592  365956 logs.go:138] Found kubelet problem: Apr 22 18:04:53 old-k8s-version-986384 kubelet[1196]: E0422 18:04:53.618422    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0422 18:07:29.938788  365956 logs.go:138] Found kubelet problem: Apr 22 18:05:00 old-k8s-version-986384 kubelet[1196]: E0422 18:05:00.619147    1196 pod_workers.go:191] Error syncing pod e523bdd8-d720-4b25-957f-a4eb6c91e844 ("dashboard-metrics-scraper-8d5bb5db8-mf7fs_kubernetes-dashboard(e523bdd8-d720-4b25-957f-a4eb6c91e844)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0422 18:07:29.940980  365956 logs.go:138] Found kubelet problem: Apr 22 18:05:05 old-k8s-version-986384 kubelet[1196]: E0422 18:05:05.638308    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0422 18:07:29.941308  365956 logs.go:138] Found kubelet problem: Apr 22 18:05:11 old-k8s-version-986384 kubelet[1196]: E0422 18:05:11.618472    1196 pod_workers.go:191] Error syncing pod e523bdd8-d720-4b25-957f-a4eb6c91e844 ("dashboard-metrics-scraper-8d5bb5db8-mf7fs_kubernetes-dashboard(e523bdd8-d720-4b25-957f-a4eb6c91e844)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0422 18:07:29.941516  365956 logs.go:138] Found kubelet problem: Apr 22 18:05:16 old-k8s-version-986384 kubelet[1196]: E0422 18:05:16.627331    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0422 18:07:29.944352  365956 logs.go:138] Found kubelet problem: Apr 22 18:05:23 old-k8s-version-986384 kubelet[1196]: E0422 18:05:23.061150    1196 pod_workers.go:191] Error syncing pod e523bdd8-d720-4b25-957f-a4eb6c91e844 ("dashboard-metrics-scraper-8d5bb5db8-mf7fs_kubernetes-dashboard(e523bdd8-d720-4b25-957f-a4eb6c91e844)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0422 18:07:29.944593  365956 logs.go:138] Found kubelet problem: Apr 22 18:05:28 old-k8s-version-986384 kubelet[1196]: E0422 18:05:28.618921    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0422 18:07:29.945064  365956 logs.go:138] Found kubelet problem: Apr 22 18:05:34 old-k8s-version-986384 kubelet[1196]: E0422 18:05:34.621754    1196 pod_workers.go:191] Error syncing pod e523bdd8-d720-4b25-957f-a4eb6c91e844 ("dashboard-metrics-scraper-8d5bb5db8-mf7fs_kubernetes-dashboard(e523bdd8-d720-4b25-957f-a4eb6c91e844)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0422 18:07:29.945265  365956 logs.go:138] Found kubelet problem: Apr 22 18:05:39 old-k8s-version-986384 kubelet[1196]: E0422 18:05:39.618114    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0422 18:07:29.945464  365956 logs.go:138] Found kubelet problem: Apr 22 18:05:45 old-k8s-version-986384 kubelet[1196]: E0422 18:05:45.663272    1196 pod_workers.go:191] Error syncing pod e523bdd8-d720-4b25-957f-a4eb6c91e844 ("dashboard-metrics-scraper-8d5bb5db8-mf7fs_kubernetes-dashboard(e523bdd8-d720-4b25-957f-a4eb6c91e844)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0422 18:07:29.945650  365956 logs.go:138] Found kubelet problem: Apr 22 18:05:51 old-k8s-version-986384 kubelet[1196]: E0422 18:05:51.618807    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0422 18:07:29.945847  365956 logs.go:138] Found kubelet problem: Apr 22 18:05:58 old-k8s-version-986384 kubelet[1196]: E0422 18:05:58.648913    1196 pod_workers.go:191] Error syncing pod e523bdd8-d720-4b25-957f-a4eb6c91e844 ("dashboard-metrics-scraper-8d5bb5db8-mf7fs_kubernetes-dashboard(e523bdd8-d720-4b25-957f-a4eb6c91e844)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0422 18:07:29.946032  365956 logs.go:138] Found kubelet problem: Apr 22 18:06:04 old-k8s-version-986384 kubelet[1196]: E0422 18:06:04.618373    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0422 18:07:29.946230  365956 logs.go:138] Found kubelet problem: Apr 22 18:06:11 old-k8s-version-986384 kubelet[1196]: E0422 18:06:11.618095    1196 pod_workers.go:191] Error syncing pod e523bdd8-d720-4b25-957f-a4eb6c91e844 ("dashboard-metrics-scraper-8d5bb5db8-mf7fs_kubernetes-dashboard(e523bdd8-d720-4b25-957f-a4eb6c91e844)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0422 18:07:29.946414  365956 logs.go:138] Found kubelet problem: Apr 22 18:06:15 old-k8s-version-986384 kubelet[1196]: E0422 18:06:15.618294    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0422 18:07:29.946610  365956 logs.go:138] Found kubelet problem: Apr 22 18:06:25 old-k8s-version-986384 kubelet[1196]: E0422 18:06:25.619230    1196 pod_workers.go:191] Error syncing pod e523bdd8-d720-4b25-957f-a4eb6c91e844 ("dashboard-metrics-scraper-8d5bb5db8-mf7fs_kubernetes-dashboard(e523bdd8-d720-4b25-957f-a4eb6c91e844)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0422 18:07:29.946797  365956 logs.go:138] Found kubelet problem: Apr 22 18:06:30 old-k8s-version-986384 kubelet[1196]: E0422 18:06:30.618689    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0422 18:07:29.946994  365956 logs.go:138] Found kubelet problem: Apr 22 18:06:39 old-k8s-version-986384 kubelet[1196]: E0422 18:06:39.618588    1196 pod_workers.go:191] Error syncing pod e523bdd8-d720-4b25-957f-a4eb6c91e844 ("dashboard-metrics-scraper-8d5bb5db8-mf7fs_kubernetes-dashboard(e523bdd8-d720-4b25-957f-a4eb6c91e844)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0422 18:07:29.947177  365956 logs.go:138] Found kubelet problem: Apr 22 18:06:43 old-k8s-version-986384 kubelet[1196]: E0422 18:06:43.618416    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0422 18:07:29.947371  365956 logs.go:138] Found kubelet problem: Apr 22 18:06:54 old-k8s-version-986384 kubelet[1196]: E0422 18:06:54.627082    1196 pod_workers.go:191] Error syncing pod e523bdd8-d720-4b25-957f-a4eb6c91e844 ("dashboard-metrics-scraper-8d5bb5db8-mf7fs_kubernetes-dashboard(e523bdd8-d720-4b25-957f-a4eb6c91e844)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0422 18:07:29.947554  365956 logs.go:138] Found kubelet problem: Apr 22 18:06:55 old-k8s-version-986384 kubelet[1196]: E0422 18:06:55.618270    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0422 18:07:29.947736  365956 logs.go:138] Found kubelet problem: Apr 22 18:07:06 old-k8s-version-986384 kubelet[1196]: E0422 18:07:06.619950    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0422 18:07:29.947930  365956 logs.go:138] Found kubelet problem: Apr 22 18:07:07 old-k8s-version-986384 kubelet[1196]: E0422 18:07:07.618622    1196 pod_workers.go:191] Error syncing pod e523bdd8-d720-4b25-957f-a4eb6c91e844 ("dashboard-metrics-scraper-8d5bb5db8-mf7fs_kubernetes-dashboard(e523bdd8-d720-4b25-957f-a4eb6c91e844)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0422 18:07:29.948113  365956 logs.go:138] Found kubelet problem: Apr 22 18:07:18 old-k8s-version-986384 kubelet[1196]: E0422 18:07:18.622590    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0422 18:07:29.948309  365956 logs.go:138] Found kubelet problem: Apr 22 18:07:18 old-k8s-version-986384 kubelet[1196]: E0422 18:07:18.645475    1196 pod_workers.go:191] Error syncing pod e523bdd8-d720-4b25-957f-a4eb6c91e844 ("dashboard-metrics-scraper-8d5bb5db8-mf7fs_kubernetes-dashboard(e523bdd8-d720-4b25-957f-a4eb6c91e844)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0422 18:07:29.948496  365956 logs.go:138] Found kubelet problem: Apr 22 18:07:29 old-k8s-version-986384 kubelet[1196]: E0422 18:07:29.619046    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0422 18:07:29.948506  365956 logs.go:123] Gathering logs for kubernetes-dashboard [38455e350073] ...
	I0422 18:07:29.948519  365956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38455e350073"
	I0422 18:07:29.972630  365956 logs.go:123] Gathering logs for storage-provisioner [2e587f1e7363] ...
	I0422 18:07:29.972659  365956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e587f1e7363"
	I0422 18:07:29.993589  365956 logs.go:123] Gathering logs for storage-provisioner [9dca99e0f1e4] ...
	I0422 18:07:29.993618  365956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9dca99e0f1e4"
	I0422 18:07:30.030083  365956 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:07:30.030117  365956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0422 18:07:30.441479  365956 logs.go:123] Gathering logs for etcd [5899397ea4a9] ...
	I0422 18:07:30.441542  365956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5899397ea4a9"
	I0422 18:07:30.498099  365956 logs.go:123] Gathering logs for coredns [f9d8bddf7197] ...
	I0422 18:07:30.498181  365956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9d8bddf7197"
	I0422 18:07:30.531927  365956 logs.go:123] Gathering logs for kube-scheduler [8d4da1dcae53] ...
	I0422 18:07:30.531997  365956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d4da1dcae53"
	I0422 18:07:30.579200  365956 logs.go:123] Gathering logs for kube-controller-manager [d0fea9e4ec1f] ...
	I0422 18:07:30.579279  365956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0fea9e4ec1f"
	I0422 18:07:30.701639  365956 out.go:304] Setting ErrFile to fd 2...
	I0422 18:07:30.701707  365956 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0422 18:07:30.701771  365956 out.go:239] X Problems detected in kubelet:
	W0422 18:07:30.701816  365956 out.go:239]   Apr 22 18:07:06 old-k8s-version-986384 kubelet[1196]: E0422 18:07:06.619950    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0422 18:07:30.701850  365956 out.go:239]   Apr 22 18:07:07 old-k8s-version-986384 kubelet[1196]: E0422 18:07:07.618622    1196 pod_workers.go:191] Error syncing pod e523bdd8-d720-4b25-957f-a4eb6c91e844 ("dashboard-metrics-scraper-8d5bb5db8-mf7fs_kubernetes-dashboard(e523bdd8-d720-4b25-957f-a4eb6c91e844)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0422 18:07:30.701896  365956 out.go:239]   Apr 22 18:07:18 old-k8s-version-986384 kubelet[1196]: E0422 18:07:18.622590    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0422 18:07:30.701940  365956 out.go:239]   Apr 22 18:07:18 old-k8s-version-986384 kubelet[1196]: E0422 18:07:18.645475    1196 pod_workers.go:191] Error syncing pod e523bdd8-d720-4b25-957f-a4eb6c91e844 ("dashboard-metrics-scraper-8d5bb5db8-mf7fs_kubernetes-dashboard(e523bdd8-d720-4b25-957f-a4eb6c91e844)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0422 18:07:30.701975  365956 out.go:239]   Apr 22 18:07:29 old-k8s-version-986384 kubelet[1196]: E0422 18:07:29.619046    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0422 18:07:30.702015  365956 out.go:304] Setting ErrFile to fd 2...
	I0422 18:07:30.702037  365956 out.go:338] TERM=,COLORTERM=, which probably does not support color
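
The kubelet problems collected above reduce to two repeating pull failures: metrics-server references fake.domain/registry.k8s.io/echoserver:1.4, whose registry host never resolves (the name suggests it is injected intentionally by the test, hence the ErrImagePull followed by ImagePullBackOff), while dashboard-metrics-scraper references registry.k8s.io/echoserver:1.4, a Docker image manifest v2 schema 1 image that current Docker daemons reject by default, per the deprecation notice in the log. A minimal sketch for reproducing both errors by hand against this node (assumption: the profile name old-k8s-version-986384 is taken from the log lines above; minikube ssh -- <cmd> runs a command inside the node):

	minikube ssh -p old-k8s-version-986384 -- docker pull fake.domain/registry.k8s.io/echoserver:1.4   # expect: lookup fake.domain ... no such host
	minikube ssh -p old-k8s-version-986384 -- docker pull registry.k8s.io/echoserver:1.4               # expect: schema 1 deprecation error
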
	I0422 18:07:40.703400  365956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:07:40.716516  365956 api_server.go:72] duration metric: took 5m53.082208744s to wait for apiserver process to appear ...
	I0422 18:07:40.716544  365956 api_server.go:88] waiting for apiserver healthz status ...
	I0422 18:07:40.716659  365956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0422 18:07:40.742123  365956 logs.go:276] 2 containers: [cfd818ddce4e 36734b404817]
	I0422 18:07:40.742218  365956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0422 18:07:40.763131  365956 logs.go:276] 2 containers: [5899397ea4a9 f029b0e7c02d]
	I0422 18:07:40.763221  365956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0422 18:07:40.779304  365956 logs.go:276] 2 containers: [27648c4ab762 f9d8bddf7197]
	I0422 18:07:40.779383  365956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0422 18:07:40.799440  365956 logs.go:276] 2 containers: [8d4da1dcae53 4ef6e29c4a78]
	I0422 18:07:40.799522  365956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0422 18:07:40.815299  365956 logs.go:276] 2 containers: [99181fe4786b 2670916c7c26]
	I0422 18:07:40.815385  365956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0422 18:07:40.831521  365956 logs.go:276] 2 containers: [980fcb0f5f0b d0fea9e4ec1f]
	I0422 18:07:40.831603  365956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0422 18:07:40.850909  365956 logs.go:276] 0 containers: []
	W0422 18:07:40.850987  365956 logs.go:278] No container was found matching "kindnet"
	I0422 18:07:40.851073  365956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0422 18:07:40.867546  365956 logs.go:276] 2 containers: [2e587f1e7363 9dca99e0f1e4]
	I0422 18:07:40.867663  365956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0422 18:07:40.888163  365956 logs.go:276] 1 containers: [38455e350073]
	I0422 18:07:40.888241  365956 logs.go:123] Gathering logs for kubelet ...
	I0422 18:07:40.888267  365956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0422 18:07:40.949225  365956 logs.go:138] Found kubelet problem: Apr 22 18:01:59 old-k8s-version-986384 kubelet[1196]: E0422 18:01:59.832037    1196 reflector.go:138] object-"kube-system"/"storage-provisioner-token-g68k5": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-g68k5" is forbidden: User "system:node:old-k8s-version-986384" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-986384' and this object
	W0422 18:07:40.949459  365956 logs.go:138] Found kubelet problem: Apr 22 18:01:59 old-k8s-version-986384 kubelet[1196]: E0422 18:01:59.832152    1196 reflector.go:138] object-"kube-system"/"kube-proxy-token-f585x": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-f585x" is forbidden: User "system:node:old-k8s-version-986384" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-986384' and this object
	W0422 18:07:40.949670  365956 logs.go:138] Found kubelet problem: Apr 22 18:01:59 old-k8s-version-986384 kubelet[1196]: E0422 18:01:59.832232    1196 reflector.go:138] object-"default"/"default-token-b4l4p": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-b4l4p" is forbidden: User "system:node:old-k8s-version-986384" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-986384' and this object
	W0422 18:07:40.949875  365956 logs.go:138] Found kubelet problem: Apr 22 18:01:59 old-k8s-version-986384 kubelet[1196]: E0422 18:01:59.832306    1196 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-986384" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-986384' and this object
	W0422 18:07:40.950077  365956 logs.go:138] Found kubelet problem: Apr 22 18:01:59 old-k8s-version-986384 kubelet[1196]: E0422 18:01:59.832383    1196 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-986384" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-986384' and this object
	W0422 18:07:40.950286  365956 logs.go:138] Found kubelet problem: Apr 22 18:01:59 old-k8s-version-986384 kubelet[1196]: E0422 18:01:59.832559    1196 reflector.go:138] object-"kube-system"/"coredns-token-2dv2q": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-2dv2q" is forbidden: User "system:node:old-k8s-version-986384" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-986384' and this object
	W0422 18:07:40.958658  365956 logs.go:138] Found kubelet problem: Apr 22 18:02:03 old-k8s-version-986384 kubelet[1196]: E0422 18:02:03.226868    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0422 18:07:40.959350  365956 logs.go:138] Found kubelet problem: Apr 22 18:02:03 old-k8s-version-986384 kubelet[1196]: E0422 18:02:03.888305    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0422 18:07:40.961733  365956 logs.go:138] Found kubelet problem: Apr 22 18:02:15 old-k8s-version-986384 kubelet[1196]: E0422 18:02:15.641552    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0422 18:07:40.966390  365956 logs.go:138] Found kubelet problem: Apr 22 18:02:24 old-k8s-version-986384 kubelet[1196]: E0422 18:02:24.265785    1196 pod_workers.go:191] Error syncing pod e523bdd8-d720-4b25-957f-a4eb6c91e844 ("dashboard-metrics-scraper-8d5bb5db8-mf7fs_kubernetes-dashboard(e523bdd8-d720-4b25-957f-a4eb6c91e844)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0422 18:07:40.966763  365956 logs.go:138] Found kubelet problem: Apr 22 18:02:24 old-k8s-version-986384 kubelet[1196]: E0422 18:02:24.368306    1196 pod_workers.go:191] Error syncing pod e523bdd8-d720-4b25-957f-a4eb6c91e844 ("dashboard-metrics-scraper-8d5bb5db8-mf7fs_kubernetes-dashboard(e523bdd8-d720-4b25-957f-a4eb6c91e844)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0422 18:07:40.966950  365956 logs.go:138] Found kubelet problem: Apr 22 18:02:30 old-k8s-version-986384 kubelet[1196]: E0422 18:02:30.628159    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0422 18:07:40.967715  365956 logs.go:138] Found kubelet problem: Apr 22 18:02:32 old-k8s-version-986384 kubelet[1196]: E0422 18:02:32.453214    1196 pod_workers.go:191] Error syncing pod df339435-cb7d-470a-8aec-c5eb3f389a93 ("storage-provisioner_kube-system(df339435-cb7d-470a-8aec-c5eb3f389a93)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(df339435-cb7d-470a-8aec-c5eb3f389a93)"
	W0422 18:07:40.970011  365956 logs.go:138] Found kubelet problem: Apr 22 18:02:39 old-k8s-version-986384 kubelet[1196]: E0422 18:02:39.049200    1196 pod_workers.go:191] Error syncing pod e523bdd8-d720-4b25-957f-a4eb6c91e844 ("dashboard-metrics-scraper-8d5bb5db8-mf7fs_kubernetes-dashboard(e523bdd8-d720-4b25-957f-a4eb6c91e844)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0422 18:07:40.972390  365956 logs.go:138] Found kubelet problem: Apr 22 18:02:43 old-k8s-version-986384 kubelet[1196]: E0422 18:02:43.646522    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0422 18:07:40.972984  365956 logs.go:138] Found kubelet problem: Apr 22 18:02:52 old-k8s-version-986384 kubelet[1196]: E0422 18:02:52.618806    1196 pod_workers.go:191] Error syncing pod e523bdd8-d720-4b25-957f-a4eb6c91e844 ("dashboard-metrics-scraper-8d5bb5db8-mf7fs_kubernetes-dashboard(e523bdd8-d720-4b25-957f-a4eb6c91e844)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0422 18:07:40.973171  365956 logs.go:138] Found kubelet problem: Apr 22 18:02:58 old-k8s-version-986384 kubelet[1196]: E0422 18:02:58.622275    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0422 18:07:40.975367  365956 logs.go:138] Found kubelet problem: Apr 22 18:03:05 old-k8s-version-986384 kubelet[1196]: E0422 18:03:05.091231    1196 pod_workers.go:191] Error syncing pod e523bdd8-d720-4b25-957f-a4eb6c91e844 ("dashboard-metrics-scraper-8d5bb5db8-mf7fs_kubernetes-dashboard(e523bdd8-d720-4b25-957f-a4eb6c91e844)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0422 18:07:40.975564  365956 logs.go:138] Found kubelet problem: Apr 22 18:03:10 old-k8s-version-986384 kubelet[1196]: E0422 18:03:10.630961    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0422 18:07:40.975759  365956 logs.go:138] Found kubelet problem: Apr 22 18:03:16 old-k8s-version-986384 kubelet[1196]: E0422 18:03:16.618514    1196 pod_workers.go:191] Error syncing pod e523bdd8-d720-4b25-957f-a4eb6c91e844 ("dashboard-metrics-scraper-8d5bb5db8-mf7fs_kubernetes-dashboard(e523bdd8-d720-4b25-957f-a4eb6c91e844)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0422 18:07:40.975944  365956 logs.go:138] Found kubelet problem: Apr 22 18:03:23 old-k8s-version-986384 kubelet[1196]: E0422 18:03:23.618416    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0422 18:07:40.976140  365956 logs.go:138] Found kubelet problem: Apr 22 18:03:30 old-k8s-version-986384 kubelet[1196]: E0422 18:03:30.632521    1196 pod_workers.go:191] Error syncing pod e523bdd8-d720-4b25-957f-a4eb6c91e844 ("dashboard-metrics-scraper-8d5bb5db8-mf7fs_kubernetes-dashboard(e523bdd8-d720-4b25-957f-a4eb6c91e844)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0422 18:07:40.978202  365956 logs.go:138] Found kubelet problem: Apr 22 18:03:34 old-k8s-version-986384 kubelet[1196]: E0422 18:03:34.649485    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0422 18:07:40.978403  365956 logs.go:138] Found kubelet problem: Apr 22 18:03:43 old-k8s-version-986384 kubelet[1196]: E0422 18:03:43.618651    1196 pod_workers.go:191] Error syncing pod e523bdd8-d720-4b25-957f-a4eb6c91e844 ("dashboard-metrics-scraper-8d5bb5db8-mf7fs_kubernetes-dashboard(e523bdd8-d720-4b25-957f-a4eb6c91e844)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0422 18:07:40.978587  365956 logs.go:138] Found kubelet problem: Apr 22 18:03:46 old-k8s-version-986384 kubelet[1196]: E0422 18:03:46.619360    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0422 18:07:40.980790  365956 logs.go:138] Found kubelet problem: Apr 22 18:03:59 old-k8s-version-986384 kubelet[1196]: E0422 18:03:59.083435    1196 pod_workers.go:191] Error syncing pod e523bdd8-d720-4b25-957f-a4eb6c91e844 ("dashboard-metrics-scraper-8d5bb5db8-mf7fs_kubernetes-dashboard(e523bdd8-d720-4b25-957f-a4eb6c91e844)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0422 18:07:40.980977  365956 logs.go:138] Found kubelet problem: Apr 22 18:03:59 old-k8s-version-986384 kubelet[1196]: E0422 18:03:59.618394    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0422 18:07:40.981160  365956 logs.go:138] Found kubelet problem: Apr 22 18:04:12 old-k8s-version-986384 kubelet[1196]: E0422 18:04:12.618866    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0422 18:07:40.981355  365956 logs.go:138] Found kubelet problem: Apr 22 18:04:13 old-k8s-version-986384 kubelet[1196]: E0422 18:04:13.618430    1196 pod_workers.go:191] Error syncing pod e523bdd8-d720-4b25-957f-a4eb6c91e844 ("dashboard-metrics-scraper-8d5bb5db8-mf7fs_kubernetes-dashboard(e523bdd8-d720-4b25-957f-a4eb6c91e844)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0422 18:07:40.981537  365956 logs.go:138] Found kubelet problem: Apr 22 18:04:23 old-k8s-version-986384 kubelet[1196]: E0422 18:04:23.618440    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0422 18:07:40.981732  365956 logs.go:138] Found kubelet problem: Apr 22 18:04:24 old-k8s-version-986384 kubelet[1196]: E0422 18:04:24.643322    1196 pod_workers.go:191] Error syncing pod e523bdd8-d720-4b25-957f-a4eb6c91e844 ("dashboard-metrics-scraper-8d5bb5db8-mf7fs_kubernetes-dashboard(e523bdd8-d720-4b25-957f-a4eb6c91e844)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0422 18:07:40.981914  365956 logs.go:138] Found kubelet problem: Apr 22 18:04:38 old-k8s-version-986384 kubelet[1196]: E0422 18:04:38.629329    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0422 18:07:40.982108  365956 logs.go:138] Found kubelet problem: Apr 22 18:04:38 old-k8s-version-986384 kubelet[1196]: E0422 18:04:38.634742    1196 pod_workers.go:191] Error syncing pod e523bdd8-d720-4b25-957f-a4eb6c91e844 ("dashboard-metrics-scraper-8d5bb5db8-mf7fs_kubernetes-dashboard(e523bdd8-d720-4b25-957f-a4eb6c91e844)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0422 18:07:40.982305  365956 logs.go:138] Found kubelet problem: Apr 22 18:04:49 old-k8s-version-986384 kubelet[1196]: E0422 18:04:49.618463    1196 pod_workers.go:191] Error syncing pod e523bdd8-d720-4b25-957f-a4eb6c91e844 ("dashboard-metrics-scraper-8d5bb5db8-mf7fs_kubernetes-dashboard(e523bdd8-d720-4b25-957f-a4eb6c91e844)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0422 18:07:40.982487  365956 logs.go:138] Found kubelet problem: Apr 22 18:04:53 old-k8s-version-986384 kubelet[1196]: E0422 18:04:53.618422    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0422 18:07:40.982683  365956 logs.go:138] Found kubelet problem: Apr 22 18:05:00 old-k8s-version-986384 kubelet[1196]: E0422 18:05:00.619147    1196 pod_workers.go:191] Error syncing pod e523bdd8-d720-4b25-957f-a4eb6c91e844 ("dashboard-metrics-scraper-8d5bb5db8-mf7fs_kubernetes-dashboard(e523bdd8-d720-4b25-957f-a4eb6c91e844)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0422 18:07:40.984722  365956 logs.go:138] Found kubelet problem: Apr 22 18:05:05 old-k8s-version-986384 kubelet[1196]: E0422 18:05:05.638308    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0422 18:07:40.984923  365956 logs.go:138] Found kubelet problem: Apr 22 18:05:11 old-k8s-version-986384 kubelet[1196]: E0422 18:05:11.618472    1196 pod_workers.go:191] Error syncing pod e523bdd8-d720-4b25-957f-a4eb6c91e844 ("dashboard-metrics-scraper-8d5bb5db8-mf7fs_kubernetes-dashboard(e523bdd8-d720-4b25-957f-a4eb6c91e844)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0422 18:07:40.985106  365956 logs.go:138] Found kubelet problem: Apr 22 18:05:16 old-k8s-version-986384 kubelet[1196]: E0422 18:05:16.627331    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0422 18:07:40.987305  365956 logs.go:138] Found kubelet problem: Apr 22 18:05:23 old-k8s-version-986384 kubelet[1196]: E0422 18:05:23.061150    1196 pod_workers.go:191] Error syncing pod e523bdd8-d720-4b25-957f-a4eb6c91e844 ("dashboard-metrics-scraper-8d5bb5db8-mf7fs_kubernetes-dashboard(e523bdd8-d720-4b25-957f-a4eb6c91e844)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0422 18:07:40.987489  365956 logs.go:138] Found kubelet problem: Apr 22 18:05:28 old-k8s-version-986384 kubelet[1196]: E0422 18:05:28.618921    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0422 18:07:40.987685  365956 logs.go:138] Found kubelet problem: Apr 22 18:05:34 old-k8s-version-986384 kubelet[1196]: E0422 18:05:34.621754    1196 pod_workers.go:191] Error syncing pod e523bdd8-d720-4b25-957f-a4eb6c91e844 ("dashboard-metrics-scraper-8d5bb5db8-mf7fs_kubernetes-dashboard(e523bdd8-d720-4b25-957f-a4eb6c91e844)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0422 18:07:40.987869  365956 logs.go:138] Found kubelet problem: Apr 22 18:05:39 old-k8s-version-986384 kubelet[1196]: E0422 18:05:39.618114    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0422 18:07:40.988066  365956 logs.go:138] Found kubelet problem: Apr 22 18:05:45 old-k8s-version-986384 kubelet[1196]: E0422 18:05:45.663272    1196 pod_workers.go:191] Error syncing pod e523bdd8-d720-4b25-957f-a4eb6c91e844 ("dashboard-metrics-scraper-8d5bb5db8-mf7fs_kubernetes-dashboard(e523bdd8-d720-4b25-957f-a4eb6c91e844)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0422 18:07:40.988248  365956 logs.go:138] Found kubelet problem: Apr 22 18:05:51 old-k8s-version-986384 kubelet[1196]: E0422 18:05:51.618807    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0422 18:07:40.988446  365956 logs.go:138] Found kubelet problem: Apr 22 18:05:58 old-k8s-version-986384 kubelet[1196]: E0422 18:05:58.648913    1196 pod_workers.go:191] Error syncing pod e523bdd8-d720-4b25-957f-a4eb6c91e844 ("dashboard-metrics-scraper-8d5bb5db8-mf7fs_kubernetes-dashboard(e523bdd8-d720-4b25-957f-a4eb6c91e844)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0422 18:07:40.988650  365956 logs.go:138] Found kubelet problem: Apr 22 18:06:04 old-k8s-version-986384 kubelet[1196]: E0422 18:06:04.618373    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0422 18:07:40.988855  365956 logs.go:138] Found kubelet problem: Apr 22 18:06:11 old-k8s-version-986384 kubelet[1196]: E0422 18:06:11.618095    1196 pod_workers.go:191] Error syncing pod e523bdd8-d720-4b25-957f-a4eb6c91e844 ("dashboard-metrics-scraper-8d5bb5db8-mf7fs_kubernetes-dashboard(e523bdd8-d720-4b25-957f-a4eb6c91e844)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0422 18:07:40.989041  365956 logs.go:138] Found kubelet problem: Apr 22 18:06:15 old-k8s-version-986384 kubelet[1196]: E0422 18:06:15.618294    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0422 18:07:40.989235  365956 logs.go:138] Found kubelet problem: Apr 22 18:06:25 old-k8s-version-986384 kubelet[1196]: E0422 18:06:25.619230    1196 pod_workers.go:191] Error syncing pod e523bdd8-d720-4b25-957f-a4eb6c91e844 ("dashboard-metrics-scraper-8d5bb5db8-mf7fs_kubernetes-dashboard(e523bdd8-d720-4b25-957f-a4eb6c91e844)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0422 18:07:40.989420  365956 logs.go:138] Found kubelet problem: Apr 22 18:06:30 old-k8s-version-986384 kubelet[1196]: E0422 18:06:30.618689    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0422 18:07:40.989615  365956 logs.go:138] Found kubelet problem: Apr 22 18:06:39 old-k8s-version-986384 kubelet[1196]: E0422 18:06:39.618588    1196 pod_workers.go:191] Error syncing pod e523bdd8-d720-4b25-957f-a4eb6c91e844 ("dashboard-metrics-scraper-8d5bb5db8-mf7fs_kubernetes-dashboard(e523bdd8-d720-4b25-957f-a4eb6c91e844)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0422 18:07:40.989797  365956 logs.go:138] Found kubelet problem: Apr 22 18:06:43 old-k8s-version-986384 kubelet[1196]: E0422 18:06:43.618416    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0422 18:07:40.989992  365956 logs.go:138] Found kubelet problem: Apr 22 18:06:54 old-k8s-version-986384 kubelet[1196]: E0422 18:06:54.627082    1196 pod_workers.go:191] Error syncing pod e523bdd8-d720-4b25-957f-a4eb6c91e844 ("dashboard-metrics-scraper-8d5bb5db8-mf7fs_kubernetes-dashboard(e523bdd8-d720-4b25-957f-a4eb6c91e844)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0422 18:07:40.990175  365956 logs.go:138] Found kubelet problem: Apr 22 18:06:55 old-k8s-version-986384 kubelet[1196]: E0422 18:06:55.618270    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0422 18:07:40.990360  365956 logs.go:138] Found kubelet problem: Apr 22 18:07:06 old-k8s-version-986384 kubelet[1196]: E0422 18:07:06.619950    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0422 18:07:40.990554  365956 logs.go:138] Found kubelet problem: Apr 22 18:07:07 old-k8s-version-986384 kubelet[1196]: E0422 18:07:07.618622    1196 pod_workers.go:191] Error syncing pod e523bdd8-d720-4b25-957f-a4eb6c91e844 ("dashboard-metrics-scraper-8d5bb5db8-mf7fs_kubernetes-dashboard(e523bdd8-d720-4b25-957f-a4eb6c91e844)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0422 18:07:40.990740  365956 logs.go:138] Found kubelet problem: Apr 22 18:07:18 old-k8s-version-986384 kubelet[1196]: E0422 18:07:18.622590    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0422 18:07:40.990936  365956 logs.go:138] Found kubelet problem: Apr 22 18:07:18 old-k8s-version-986384 kubelet[1196]: E0422 18:07:18.645475    1196 pod_workers.go:191] Error syncing pod e523bdd8-d720-4b25-957f-a4eb6c91e844 ("dashboard-metrics-scraper-8d5bb5db8-mf7fs_kubernetes-dashboard(e523bdd8-d720-4b25-957f-a4eb6c91e844)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0422 18:07:40.991121  365956 logs.go:138] Found kubelet problem: Apr 22 18:07:29 old-k8s-version-986384 kubelet[1196]: E0422 18:07:29.619046    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0422 18:07:40.991317  365956 logs.go:138] Found kubelet problem: Apr 22 18:07:33 old-k8s-version-986384 kubelet[1196]: E0422 18:07:33.618306    1196 pod_workers.go:191] Error syncing pod e523bdd8-d720-4b25-957f-a4eb6c91e844 ("dashboard-metrics-scraper-8d5bb5db8-mf7fs_kubernetes-dashboard(e523bdd8-d720-4b25-957f-a4eb6c91e844)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	I0422 18:07:40.991327  365956 logs.go:123] Gathering logs for etcd [f029b0e7c02d] ...
	I0422 18:07:40.991341  365956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f029b0e7c02d"
	I0422 18:07:41.018437  365956 logs.go:123] Gathering logs for kube-scheduler [8d4da1dcae53] ...
	I0422 18:07:41.018468  365956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d4da1dcae53"
	I0422 18:07:41.057870  365956 logs.go:123] Gathering logs for kube-scheduler [4ef6e29c4a78] ...
	I0422 18:07:41.057901  365956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ef6e29c4a78"
	I0422 18:07:41.085317  365956 logs.go:123] Gathering logs for kube-controller-manager [980fcb0f5f0b] ...
	I0422 18:07:41.085388  365956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 980fcb0f5f0b"
	I0422 18:07:41.132811  365956 logs.go:123] Gathering logs for storage-provisioner [2e587f1e7363] ...
	I0422 18:07:41.132846  365956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e587f1e7363"
	I0422 18:07:41.168635  365956 logs.go:123] Gathering logs for dmesg ...
	I0422 18:07:41.168664  365956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:07:41.188491  365956 logs.go:123] Gathering logs for etcd [5899397ea4a9] ...
	I0422 18:07:41.188519  365956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5899397ea4a9"
	I0422 18:07:41.211325  365956 logs.go:123] Gathering logs for coredns [27648c4ab762] ...
	I0422 18:07:41.211352  365956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27648c4ab762"
	I0422 18:07:41.232027  365956 logs.go:123] Gathering logs for kube-controller-manager [d0fea9e4ec1f] ...
	I0422 18:07:41.232062  365956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0fea9e4ec1f"
	I0422 18:07:41.293576  365956 logs.go:123] Gathering logs for container status ...
	I0422 18:07:41.293611  365956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:07:41.343832  365956 logs.go:123] Gathering logs for kubernetes-dashboard [38455e350073] ...
	I0422 18:07:41.343859  365956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38455e350073"
	I0422 18:07:41.368134  365956 logs.go:123] Gathering logs for Docker ...
	I0422 18:07:41.368170  365956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0422 18:07:41.402656  365956 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:07:41.402691  365956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0422 18:07:41.574728  365956 logs.go:123] Gathering logs for kube-apiserver [cfd818ddce4e] ...
	I0422 18:07:41.574762  365956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfd818ddce4e"
	I0422 18:07:41.620507  365956 logs.go:123] Gathering logs for kube-apiserver [36734b404817] ...
	I0422 18:07:41.620545  365956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36734b404817"
	I0422 18:07:41.711172  365956 logs.go:123] Gathering logs for coredns [f9d8bddf7197] ...
	I0422 18:07:41.711209  365956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9d8bddf7197"
	I0422 18:07:41.735067  365956 logs.go:123] Gathering logs for kube-proxy [99181fe4786b] ...
	I0422 18:07:41.735100  365956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99181fe4786b"
	I0422 18:07:41.757564  365956 logs.go:123] Gathering logs for kube-proxy [2670916c7c26] ...
	I0422 18:07:41.757590  365956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2670916c7c26"
	I0422 18:07:41.780898  365956 logs.go:123] Gathering logs for storage-provisioner [9dca99e0f1e4] ...
	I0422 18:07:41.780926  365956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9dca99e0f1e4"
	I0422 18:07:41.803638  365956 out.go:304] Setting ErrFile to fd 2...
	I0422 18:07:41.803662  365956 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0422 18:07:41.803703  365956 out.go:239] X Problems detected in kubelet:
	W0422 18:07:41.803717  365956 out.go:239]   Apr 22 18:07:07 old-k8s-version-986384 kubelet[1196]: E0422 18:07:07.618622    1196 pod_workers.go:191] Error syncing pod e523bdd8-d720-4b25-957f-a4eb6c91e844 ("dashboard-metrics-scraper-8d5bb5db8-mf7fs_kubernetes-dashboard(e523bdd8-d720-4b25-957f-a4eb6c91e844)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0422 18:07:41.803726  365956 out.go:239]   Apr 22 18:07:18 old-k8s-version-986384 kubelet[1196]: E0422 18:07:18.622590    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0422 18:07:41.803738  365956 out.go:239]   Apr 22 18:07:18 old-k8s-version-986384 kubelet[1196]: E0422 18:07:18.645475    1196 pod_workers.go:191] Error syncing pod e523bdd8-d720-4b25-957f-a4eb6c91e844 ("dashboard-metrics-scraper-8d5bb5db8-mf7fs_kubernetes-dashboard(e523bdd8-d720-4b25-957f-a4eb6c91e844)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0422 18:07:41.803745  365956 out.go:239]   Apr 22 18:07:29 old-k8s-version-986384 kubelet[1196]: E0422 18:07:29.619046    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0422 18:07:41.803752  365956 out.go:239]   Apr 22 18:07:33 old-k8s-version-986384 kubelet[1196]: E0422 18:07:33.618306    1196 pod_workers.go:191] Error syncing pod e523bdd8-d720-4b25-957f-a4eb6c91e844 ("dashboard-metrics-scraper-8d5bb5db8-mf7fs_kubernetes-dashboard(e523bdd8-d720-4b25-957f-a4eb6c91e844)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	I0422 18:07:41.803765  365956 out.go:304] Setting ErrFile to fd 2...
	I0422 18:07:41.803770  365956 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 18:07:51.804813  365956 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0422 18:07:51.817041  365956 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0422 18:07:51.819583  365956 out.go:177] 
	W0422 18:07:51.821795  365956 out.go:239] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W0422 18:07:51.821832  365956 out.go:239] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	W0422 18:07:51.821850  365956 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	W0422 18:07:51.821855  365956 out.go:239] * 
	W0422 18:07:51.822809  365956 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0422 18:07:51.826094  365956 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p old-k8s-version-986384 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0": exit status 102
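Note on the failure above: the ImagePullBackOff lines in the kubelet log are expected noise, since this test deliberately repoints the metrics-server image at the unreachable registry fake.domain ("addons enable metrics-server ... --registries=MetricsServer=fake.domain" in the Audit table below), so those pulls can never succeed. The test actually fails with K8S_UNHEALTHY_CONTROL_PLANE: the control plane never reported v1.20.0 within the 6m0s wait. A minimal retry/triage sketch, following the suggestions printed by minikube itself (profile name and flags copied from this run; a cleanup attempt, not a verified fix):

    # Tear down all profiles and cached state, as the minikube output suggests.
    out/minikube-linux-arm64 delete --all --purge

    # Re-run the start command that exited with status 102.
    out/minikube-linux-arm64 start -p old-k8s-version-986384 --memory=2200 \
      --alsologtostderr --wait=true --kvm-network=default \
      --kvm-qemu-uri=qemu:///system --disable-driver-mounts \
      --keep-context=false --driver=docker --container-runtime=docker \
      --kubernetes-version=v1.20.0

    # If it fails again, capture logs for the related upstream issue
    # (https://github.com/kubernetes/minikube/issues/11417).
    out/minikube-linux-arm64 -p old-k8s-version-986384 logs --file=logs.txt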
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-986384
helpers_test.go:235: (dbg) docker inspect old-k8s-version-986384:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "2b3baeee162dcca0c2b59e0d746e4ebbbecf6a0a183af319ac6f72cb05d869ea",
	        "Created": "2024-04-22T17:58:40.517486316Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 366155,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-04-22T18:01:39.640415726Z",
	            "FinishedAt": "2024-04-22T18:01:38.561841571Z"
	        },
	        "Image": "sha256:c9315e0f61546d7b9630cf89252fa7f614fc966830e816cca5333df5c944376f",
	        "ResolvConfPath": "/var/lib/docker/containers/2b3baeee162dcca0c2b59e0d746e4ebbbecf6a0a183af319ac6f72cb05d869ea/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2b3baeee162dcca0c2b59e0d746e4ebbbecf6a0a183af319ac6f72cb05d869ea/hostname",
	        "HostsPath": "/var/lib/docker/containers/2b3baeee162dcca0c2b59e0d746e4ebbbecf6a0a183af319ac6f72cb05d869ea/hosts",
	        "LogPath": "/var/lib/docker/containers/2b3baeee162dcca0c2b59e0d746e4ebbbecf6a0a183af319ac6f72cb05d869ea/2b3baeee162dcca0c2b59e0d746e4ebbbecf6a0a183af319ac6f72cb05d869ea-json.log",
	        "Name": "/old-k8s-version-986384",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-986384:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-986384",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/49ac286702e3a8183222e6ce1458a85ed902e496cfb485ae8806eb0558fd30c5-init/diff:/var/lib/docker/overlay2/b1699f4b68a9298b206924fbb5011a78112fb741c2187f99822d61619a4228cf/diff",
	                "MergedDir": "/var/lib/docker/overlay2/49ac286702e3a8183222e6ce1458a85ed902e496cfb485ae8806eb0558fd30c5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/49ac286702e3a8183222e6ce1458a85ed902e496cfb485ae8806eb0558fd30c5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/49ac286702e3a8183222e6ce1458a85ed902e496cfb485ae8806eb0558fd30c5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-986384",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-986384/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-986384",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-986384",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-986384",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "10ecd55f00a14b021ac702e19b9f504522169e5702e6d9cb8d8136a7777cc315",
	            "SandboxKey": "/var/run/docker/netns/10ecd55f00a1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33134"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33133"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33130"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33132"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33131"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-986384": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "NetworkID": "cf59603fc3c652db8c3cd6e532939511c0fe485962acf2ce9c014504167d3c85",
	                    "EndpointID": "33a3e1a10b9b1651bf758de8b6c4e7bc65da559f1002543de64bec1727ad09e6",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "old-k8s-version-986384",
	                        "2b3baeee162d"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
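For triage, the few fields that matter can be pulled straight out of the inspect blob above with docker's standard --format Go templates instead of reading the full JSON. A small sketch (container and network name taken from this run):

    # Node IP on the per-profile network (192.168.76.2 in the JSON above).
    docker inspect -f '{{(index .NetworkSettings.Networks "old-k8s-version-986384").IPAddress}}' old-k8s-version-986384

    # Host port publishing the API server's 8443/tcp (33131 above).
    docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' old-k8s-version-986384

    # Container state at a glance.
    docker inspect -f '{{.State.Status}} pid={{.State.Pid}} restarts={{.RestartCount}}' old-k8s-version-986384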
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-986384 -n old-k8s-version-986384
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-986384 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-986384 logs -n 25: (1.384568141s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p kubenet-060426 sudo                                 | kubenet-060426               | jenkins | v1.33.0 | 22 Apr 24 17:59 UTC |                     |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p kubenet-060426 sudo                                 | kubenet-060426               | jenkins | v1.33.0 | 22 Apr 24 17:59 UTC | 22 Apr 24 17:59 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p kubenet-060426 sudo find                            | kubenet-060426               | jenkins | v1.33.0 | 22 Apr 24 17:59 UTC | 22 Apr 24 17:59 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p kubenet-060426 sudo crio                            | kubenet-060426               | jenkins | v1.33.0 | 22 Apr 24 17:59 UTC | 22 Apr 24 17:59 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p kubenet-060426                                      | kubenet-060426               | jenkins | v1.33.0 | 22 Apr 24 17:59 UTC | 22 Apr 24 17:59 UTC |
	| start   | -p embed-certs-472320                                  | embed-certs-472320           | jenkins | v1.33.0 | 22 Apr 24 17:59 UTC | 22 Apr 24 18:00 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                              |         |         |                     |                     |
	|         |  --container-runtime=docker                            |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-472320            | embed-certs-472320           | jenkins | v1.33.0 | 22 Apr 24 18:00 UTC | 22 Apr 24 18:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-472320                                  | embed-certs-472320           | jenkins | v1.33.0 | 22 Apr 24 18:00 UTC | 22 Apr 24 18:00 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-472320                 | embed-certs-472320           | jenkins | v1.33.0 | 22 Apr 24 18:00 UTC | 22 Apr 24 18:00 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-472320                                  | embed-certs-472320           | jenkins | v1.33.0 | 22 Apr 24 18:00 UTC | 22 Apr 24 18:05 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                              |         |         |                     |                     |
	|         |  --container-runtime=docker                            |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-986384        | old-k8s-version-986384       | jenkins | v1.33.0 | 22 Apr 24 18:01 UTC | 22 Apr 24 18:01 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-986384                              | old-k8s-version-986384       | jenkins | v1.33.0 | 22 Apr 24 18:01 UTC | 22 Apr 24 18:01 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-986384             | old-k8s-version-986384       | jenkins | v1.33.0 | 22 Apr 24 18:01 UTC | 22 Apr 24 18:01 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-986384                              | old-k8s-version-986384       | jenkins | v1.33.0 | 22 Apr 24 18:01 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=docker                             |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| image   | embed-certs-472320 image list                          | embed-certs-472320           | jenkins | v1.33.0 | 22 Apr 24 18:05 UTC | 22 Apr 24 18:05 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p embed-certs-472320                                  | embed-certs-472320           | jenkins | v1.33.0 | 22 Apr 24 18:05 UTC | 22 Apr 24 18:05 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p embed-certs-472320                                  | embed-certs-472320           | jenkins | v1.33.0 | 22 Apr 24 18:05 UTC | 22 Apr 24 18:05 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p embed-certs-472320                                  | embed-certs-472320           | jenkins | v1.33.0 | 22 Apr 24 18:05 UTC | 22 Apr 24 18:05 UTC |
	| delete  | -p embed-certs-472320                                  | embed-certs-472320           | jenkins | v1.33.0 | 22 Apr 24 18:05 UTC | 22 Apr 24 18:05 UTC |
	| delete  | -p                                                     | disable-driver-mounts-242268 | jenkins | v1.33.0 | 22 Apr 24 18:05 UTC | 22 Apr 24 18:05 UTC |
	|         | disable-driver-mounts-242268                           |                              |         |         |                     |                     |
	| start   | -p no-preload-256480                                   | no-preload-256480            | jenkins | v1.33.0 | 22 Apr 24 18:05 UTC | 22 Apr 24 18:06 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=docker                             |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-256480             | no-preload-256480            | jenkins | v1.33.0 | 22 Apr 24 18:06 UTC | 22 Apr 24 18:06 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-256480                                   | no-preload-256480            | jenkins | v1.33.0 | 22 Apr 24 18:06 UTC | 22 Apr 24 18:06 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-256480                  | no-preload-256480            | jenkins | v1.33.0 | 22 Apr 24 18:06 UTC | 22 Apr 24 18:06 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-256480                                   | no-preload-256480            | jenkins | v1.33.0 | 22 Apr 24 18:06 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=docker                             |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/22 18:06:59
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.22.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0422 18:06:59.025840  378993 out.go:291] Setting OutFile to fd 1 ...
	I0422 18:06:59.025985  378993 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 18:06:59.025995  378993 out.go:304] Setting ErrFile to fd 2...
	I0422 18:06:59.026001  378993 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 18:06:59.026271  378993 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18706-2371/.minikube/bin
	I0422 18:06:59.026654  378993 out.go:298] Setting JSON to false
	I0422 18:06:59.027744  378993 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":6566,"bootTime":1713802653,"procs":247,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0422 18:06:59.027819  378993 start.go:139] virtualization:  
	I0422 18:06:59.031020  378993 out.go:177] * [no-preload-256480] minikube v1.33.0 on Ubuntu 20.04 (arm64)
	I0422 18:06:59.035497  378993 notify.go:220] Checking for updates...
	I0422 18:06:59.036197  378993 out.go:177]   - MINIKUBE_LOCATION=18706
	I0422 18:06:59.039392  378993 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0422 18:06:59.041743  378993 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18706-2371/kubeconfig
	I0422 18:06:59.044304  378993 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18706-2371/.minikube
	I0422 18:06:59.046638  378993 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0422 18:06:59.049079  378993 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0422 18:06:54.412693  365956 pod_ready.go:102] pod "metrics-server-9975d5f86-ts7pq" in "kube-system" namespace has status "Ready":"False"
	I0422 18:06:56.912275  365956 pod_ready.go:102] pod "metrics-server-9975d5f86-ts7pq" in "kube-system" namespace has status "Ready":"False"
	I0422 18:06:58.914019  365956 pod_ready.go:102] pod "metrics-server-9975d5f86-ts7pq" in "kube-system" namespace has status "Ready":"False"
	I0422 18:06:59.051932  378993 config.go:182] Loaded profile config "no-preload-256480": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0422 18:06:59.052461  378993 driver.go:392] Setting default libvirt URI to qemu:///system
	I0422 18:06:59.072461  378993 docker.go:122] docker version: linux-26.0.2:Docker Engine - Community
	I0422 18:06:59.072581  378993 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0422 18:06:59.148042  378993 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:39 OomKillDisable:true NGoroutines:52 SystemTime:2024-04-22 18:06:59.13658691 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1058-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215101440 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:26.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1]] Warnings:<nil>}}
	I0422 18:06:59.148161  378993 docker.go:295] overlay module found
	I0422 18:06:59.150517  378993 out.go:177] * Using the docker driver based on existing profile
	I0422 18:06:59.152827  378993 start.go:297] selected driver: docker
	I0422 18:06:59.152847  378993 start.go:901] validating driver "docker" against &{Name:no-preload-256480 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:no-preload-256480 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 18:06:59.152944  378993 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0422 18:06:59.153574  378993 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0422 18:06:59.205430  378993 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:39 OomKillDisable:true NGoroutines:52 SystemTime:2024-04-22 18:06:59.19651651 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1058-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215101440 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:26.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1]] Warnings:<nil>}}
	I0422 18:06:59.205787  378993 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0422 18:06:59.205845  378993 cni.go:84] Creating CNI manager for ""
	I0422 18:06:59.205867  378993 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0422 18:06:59.205915  378993 start.go:340] cluster config:
	{Name:no-preload-256480 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:no-preload-256480 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 18:06:59.208678  378993 out.go:177] * Starting "no-preload-256480" primary control-plane node in "no-preload-256480" cluster
	I0422 18:06:59.210982  378993 cache.go:121] Beginning downloading kic base image for docker with docker
	I0422 18:06:59.213468  378993 out.go:177] * Pulling base image v0.0.43-1713736339-18706 ...
	I0422 18:06:59.215819  378993 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0422 18:06:59.215907  378993 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e in local docker daemon
	I0422 18:06:59.215966  378993 profile.go:143] Saving config to /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/no-preload-256480/config.json ...
	I0422 18:06:59.216244  378993 cache.go:107] acquiring lock: {Name:mk982e0834e730075b8cf71a515bf2178e2fc860 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0422 18:06:59.216326  378993 cache.go:115] /home/jenkins/minikube-integration/18706-2371/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0422 18:06:59.216338  378993 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/18706-2371/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 99.854µs
	I0422 18:06:59.216356  378993 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/18706-2371/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0422 18:06:59.216367  378993 cache.go:107] acquiring lock: {Name:mk2d7233727885311987186362ee61a9c8d933ac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0422 18:06:59.216403  378993 cache.go:115] /home/jenkins/minikube-integration/18706-2371/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.0 exists
	I0422 18:06:59.216413  378993 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.30.0" -> "/home/jenkins/minikube-integration/18706-2371/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.0" took 47.146µs
	I0422 18:06:59.216419  378993 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.30.0 -> /home/jenkins/minikube-integration/18706-2371/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.0 succeeded
	I0422 18:06:59.216429  378993 cache.go:107] acquiring lock: {Name:mk4572706ca61dccee5fcc2b91db0197146b10c2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0422 18:06:59.216460  378993 cache.go:115] /home/jenkins/minikube-integration/18706-2371/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.0 exists
	I0422 18:06:59.216470  378993 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.30.0" -> "/home/jenkins/minikube-integration/18706-2371/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.0" took 42.099µs
	I0422 18:06:59.216477  378993 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.30.0 -> /home/jenkins/minikube-integration/18706-2371/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.0 succeeded
	I0422 18:06:59.216486  378993 cache.go:107] acquiring lock: {Name:mkb78c5efaf91d7ae547bf9edac56ac31cdd6e1b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0422 18:06:59.216525  378993 cache.go:115] /home/jenkins/minikube-integration/18706-2371/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.0 exists
	I0422 18:06:59.216536  378993 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.30.0" -> "/home/jenkins/minikube-integration/18706-2371/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.0" took 51.74µs
	I0422 18:06:59.216543  378993 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.30.0 -> /home/jenkins/minikube-integration/18706-2371/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.0 succeeded
	I0422 18:06:59.216557  378993 cache.go:107] acquiring lock: {Name:mk6d7e1aeb5fe86ca9342d550af611161dd25e6a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0422 18:06:59.216590  378993 cache.go:115] /home/jenkins/minikube-integration/18706-2371/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.0 exists
	I0422 18:06:59.216599  378993 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.30.0" -> "/home/jenkins/minikube-integration/18706-2371/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.0" took 43.084µs
	I0422 18:06:59.216611  378993 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.30.0 -> /home/jenkins/minikube-integration/18706-2371/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.0 succeeded
	I0422 18:06:59.216634  378993 cache.go:107] acquiring lock: {Name:mk11c90aaee40e4efdbf2b0b94f0909d1e296eb8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0422 18:06:59.216664  378993 cache.go:115] /home/jenkins/minikube-integration/18706-2371/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0422 18:06:59.216674  378993 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/home/jenkins/minikube-integration/18706-2371/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 41.886µs
	I0422 18:06:59.216681  378993 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /home/jenkins/minikube-integration/18706-2371/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0422 18:06:59.216690  378993 cache.go:107] acquiring lock: {Name:mkd99ac6b013b25833ffc94579a4eda7ccb9aa66 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0422 18:06:59.216718  378993 cache.go:115] /home/jenkins/minikube-integration/18706-2371/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0 exists
	I0422 18:06:59.216728  378993 cache.go:96] cache image "registry.k8s.io/etcd:3.5.12-0" -> "/home/jenkins/minikube-integration/18706-2371/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0" took 39.392µs
	I0422 18:06:59.216734  378993 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.12-0 -> /home/jenkins/minikube-integration/18706-2371/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0 succeeded
	I0422 18:06:59.216746  378993 cache.go:107] acquiring lock: {Name:mk91a8a98fb15bb7356fa05a6738507d9bb85300 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0422 18:06:59.216927  378993 cache.go:115] /home/jenkins/minikube-integration/18706-2371/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0422 18:06:59.216942  378993 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/home/jenkins/minikube-integration/18706-2371/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 199.674µs
	I0422 18:06:59.216950  378993 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /home/jenkins/minikube-integration/18706-2371/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0422 18:06:59.216957  378993 cache.go:87] Successfully saved all images to host disk.
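
The eight cache.go entries above share one pattern: take a per-image lock, check whether the image tarball already exists under .minikube/cache/images/arm64/, and only download and save it on a miss (every lookup here was a hit, each in well under a millisecond). A minimal shell sketch of that existence check, using the cache layout shown in the log (the loop itself is illustrative, not minikube's own code):

	# Check each required image against the on-disk cache (paths as in the log).
	CACHE_DIR=/home/jenkins/minikube-integration/18706-2371/.minikube/cache/images/arm64
	for img in registry.k8s.io/kube-apiserver_v1.30.0 registry.k8s.io/pause_3.9; do
	  if [ -e "$CACHE_DIR/$img" ]; then
	    echo "cache hit:  $img"   # log: "exists ... succeeded"
	  else
	    echo "cache miss: $img"   # minikube would pull and save a tarball here
	  fi
	done
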
	I0422 18:06:59.237778  378993 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e in local docker daemon, skipping pull
	I0422 18:06:59.237804  378993 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e exists in daemon, skipping load
	I0422 18:06:59.237824  378993 cache.go:194] Successfully downloaded all kic artifacts
	I0422 18:06:59.237851  378993 start.go:360] acquireMachinesLock for no-preload-256480: {Name:mkf4592d83fa520ffb8c066a7abab0272185ded0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0422 18:06:59.237917  378993 start.go:364] duration metric: took 43.355µs to acquireMachinesLock for "no-preload-256480"
	I0422 18:06:59.237941  378993 start.go:96] Skipping create...Using existing machine configuration
	I0422 18:06:59.237955  378993 fix.go:54] fixHost starting: 
	I0422 18:06:59.238235  378993 cli_runner.go:164] Run: docker container inspect no-preload-256480 --format={{.State.Status}}
	I0422 18:06:59.253281  378993 fix.go:112] recreateIfNeeded on no-preload-256480: state=Stopped err=<nil>
	W0422 18:06:59.253311  378993 fix.go:138] unexpected machine state, will restart: <nil>
	I0422 18:06:59.256002  378993 out.go:177] * Restarting existing docker container for "no-preload-256480" ...
	I0422 18:06:59.258293  378993 cli_runner.go:164] Run: docker start no-preload-256480
	I0422 18:06:59.568923  378993 cli_runner.go:164] Run: docker container inspect no-preload-256480 --format={{.State.Status}}
	I0422 18:06:59.589611  378993 kic.go:430] container "no-preload-256480" state is running.
	I0422 18:06:59.589984  378993 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-256480
	I0422 18:06:59.609998  378993 profile.go:143] Saving config to /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/no-preload-256480/config.json ...
	I0422 18:06:59.610228  378993 machine.go:94] provisionDockerMachine start ...
	I0422 18:06:59.610292  378993 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-256480
	I0422 18:06:59.629263  378993 main.go:141] libmachine: Using SSH client type: native
	I0422 18:06:59.629532  378993 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e90] 0x3e46f0 <nil>  [] 0s} 127.0.0.1 33144 <nil> <nil>}
	I0422 18:06:59.629546  378993 main.go:141] libmachine: About to run SSH command:
	hostname
	I0422 18:06:59.630172  378993 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:46928->127.0.0.1:33144: read: connection reset by peer
	I0422 18:07:02.760455  378993 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-256480
	
	I0422 18:07:02.760482  378993 ubuntu.go:169] provisioning hostname "no-preload-256480"
	I0422 18:07:02.760562  378993 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-256480
	I0422 18:07:02.777291  378993 main.go:141] libmachine: Using SSH client type: native
	I0422 18:07:02.777556  378993 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e90] 0x3e46f0 <nil>  [] 0s} 127.0.0.1 33144 <nil> <nil>}
	I0422 18:07:02.777576  378993 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-256480 && echo "no-preload-256480" | sudo tee /etc/hostname
	I0422 18:07:02.917775  378993 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-256480
	
	I0422 18:07:02.917856  378993 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-256480
	I0422 18:07:02.936031  378993 main.go:141] libmachine: Using SSH client type: native
	I0422 18:07:02.936274  378993 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e90] 0x3e46f0 <nil>  [] 0s} 127.0.0.1 33144 <nil> <nil>}
	I0422 18:07:02.936296  378993 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-256480' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-256480/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-256480' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0422 18:07:03.078136  378993 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0422 18:07:03.078165  378993 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18706-2371/.minikube CaCertPath:/home/jenkins/minikube-integration/18706-2371/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18706-2371/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18706-2371/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18706-2371/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18706-2371/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18706-2371/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18706-2371/.minikube}
	I0422 18:07:03.078249  378993 ubuntu.go:177] setting up certificates
	I0422 18:07:03.078267  378993 provision.go:84] configureAuth start
	I0422 18:07:03.078365  378993 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-256480
	I0422 18:07:03.095183  378993 provision.go:143] copyHostCerts
	I0422 18:07:03.095279  378993 exec_runner.go:144] found /home/jenkins/minikube-integration/18706-2371/.minikube/key.pem, removing ...
	I0422 18:07:03.095294  378993 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18706-2371/.minikube/key.pem
	I0422 18:07:03.095384  378993 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-2371/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18706-2371/.minikube/key.pem (1675 bytes)
	I0422 18:07:03.095642  378993 exec_runner.go:144] found /home/jenkins/minikube-integration/18706-2371/.minikube/ca.pem, removing ...
	I0422 18:07:03.095658  378993 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18706-2371/.minikube/ca.pem
	I0422 18:07:03.095705  378993 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-2371/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18706-2371/.minikube/ca.pem (1078 bytes)
	I0422 18:07:03.095791  378993 exec_runner.go:144] found /home/jenkins/minikube-integration/18706-2371/.minikube/cert.pem, removing ...
	I0422 18:07:03.095797  378993 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18706-2371/.minikube/cert.pem
	I0422 18:07:03.095825  378993 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18706-2371/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18706-2371/.minikube/cert.pem (1123 bytes)
	I0422 18:07:03.095878  378993 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18706-2371/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18706-2371/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18706-2371/.minikube/certs/ca-key.pem org=jenkins.no-preload-256480 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-256480]
	I0422 18:07:03.601262  378993 provision.go:177] copyRemoteCerts
	I0422 18:07:03.601339  378993 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0422 18:07:03.601394  378993 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-256480
	I0422 18:07:03.618494  378993 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/18706-2371/.minikube/machines/no-preload-256480/id_rsa Username:docker}
	I0422 18:07:03.709813  378993 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-2371/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0422 18:07:03.734343  378993 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-2371/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0422 18:07:03.760142  378993 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-2371/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0422 18:07:03.784798  378993 provision.go:87] duration metric: took 706.484995ms to configureAuth
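
configureAuth generated a fresh server certificate whose SANs cover every address a client might use to reach the Docker daemon (127.0.0.1, 192.168.85.2, localhost, minikube, no-preload-256480) and copied it to /etc/docker/server.pem inside the node. The SAN list can be confirmed after the fact with a standard openssl query (assuming OpenSSL >= 1.1.1 in the node image, which the Ubuntu 22.04 base satisfies):

	# Print the Subject Alternative Names of the provisioned server cert.
	docker exec no-preload-256480 openssl x509 -noout -ext subjectAltName \
	  -in /etc/docker/server.pem
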
	I0422 18:07:03.784832  378993 ubuntu.go:193] setting minikube options for container-runtime
	I0422 18:07:03.785041  378993 config.go:182] Loaded profile config "no-preload-256480": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0422 18:07:03.785123  378993 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-256480
	I0422 18:07:03.801147  378993 main.go:141] libmachine: Using SSH client type: native
	I0422 18:07:03.801400  378993 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e90] 0x3e46f0 <nil>  [] 0s} 127.0.0.1 33144 <nil> <nil>}
	I0422 18:07:03.801416  378993 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0422 18:07:03.925442  378993 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0422 18:07:03.925478  378993 ubuntu.go:71] root file system type: overlay
	I0422 18:07:03.925575  378993 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0422 18:07:03.925646  378993 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-256480
	I0422 18:07:03.940754  378993 main.go:141] libmachine: Using SSH client type: native
	I0422 18:07:03.941078  378993 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e90] 0x3e46f0 <nil>  [] 0s} 127.0.0.1 33144 <nil> <nil>}
	I0422 18:07:03.941165  378993 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0422 18:07:01.412728  365956 pod_ready.go:102] pod "metrics-server-9975d5f86-ts7pq" in "kube-system" namespace has status "Ready":"False"
	I0422 18:07:03.911888  365956 pod_ready.go:102] pod "metrics-server-9975d5f86-ts7pq" in "kube-system" namespace has status "Ready":"False"
	I0422 18:07:04.089983  378993 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0422 18:07:04.090106  378993 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-256480
	I0422 18:07:04.108437  378993 main.go:141] libmachine: Using SSH client type: native
	I0422 18:07:04.108862  378993 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e90] 0x3e46f0 <nil>  [] 0s} 127.0.0.1 33144 <nil> <nil>}
	I0422 18:07:04.108891  378993 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0422 18:07:04.247191  378993 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0422 18:07:04.247214  378993 machine.go:97] duration metric: took 4.636972358s to provisionDockerMachine
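
The docker.service update that just completed is deliberately idempotent: the candidate unit is written to docker.service.new, diffed against the live unit, and the daemon is only re-enabled and restarted when the two differ. Stripped to its skeleton, this is the same command shape the log runs over SSH (unit body elided here):

	# Swap in the new unit only if it differs from the live one.
	sudo tee /lib/systemd/system/docker.service.new >/dev/null <<'EOF'
	[Unit]
	...
	EOF
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || {
	  sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
	  sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker
	}
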
	I0422 18:07:04.247227  378993 start.go:293] postStartSetup for "no-preload-256480" (driver="docker")
	I0422 18:07:04.247238  378993 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0422 18:07:04.247326  378993 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0422 18:07:04.247371  378993 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-256480
	I0422 18:07:04.263624  378993 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/18706-2371/.minikube/machines/no-preload-256480/id_rsa Username:docker}
	I0422 18:07:04.354144  378993 ssh_runner.go:195] Run: cat /etc/os-release
	I0422 18:07:04.357349  378993 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0422 18:07:04.357383  378993 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0422 18:07:04.357397  378993 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0422 18:07:04.357404  378993 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0422 18:07:04.357414  378993 filesync.go:126] Scanning /home/jenkins/minikube-integration/18706-2371/.minikube/addons for local assets ...
	I0422 18:07:04.357467  378993 filesync.go:126] Scanning /home/jenkins/minikube-integration/18706-2371/.minikube/files for local assets ...
	I0422 18:07:04.357545  378993 filesync.go:149] local asset: /home/jenkins/minikube-integration/18706-2371/.minikube/files/etc/ssl/certs/77282.pem -> 77282.pem in /etc/ssl/certs
	I0422 18:07:04.357669  378993 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0422 18:07:04.366022  378993 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-2371/.minikube/files/etc/ssl/certs/77282.pem --> /etc/ssl/certs/77282.pem (1708 bytes)
	I0422 18:07:04.391147  378993 start.go:296] duration metric: took 143.904685ms for postStartSetup
	I0422 18:07:04.391302  378993 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0422 18:07:04.391359  378993 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-256480
	I0422 18:07:04.411210  378993 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/18706-2371/.minikube/machines/no-preload-256480/id_rsa Username:docker}
	I0422 18:07:04.497710  378993 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0422 18:07:04.502197  378993 fix.go:56] duration metric: took 5.264241004s for fixHost
	I0422 18:07:04.502221  378993 start.go:83] releasing machines lock for "no-preload-256480", held for 5.26429198s
	I0422 18:07:04.502311  378993 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-256480
	I0422 18:07:04.519288  378993 ssh_runner.go:195] Run: cat /version.json
	I0422 18:07:04.519352  378993 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-256480
	I0422 18:07:04.519751  378993 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0422 18:07:04.519804  378993 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-256480
	I0422 18:07:04.538275  378993 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/18706-2371/.minikube/machines/no-preload-256480/id_rsa Username:docker}
	I0422 18:07:04.551678  378993 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/18706-2371/.minikube/machines/no-preload-256480/id_rsa Username:docker}
	I0422 18:07:04.752786  378993 ssh_runner.go:195] Run: systemctl --version
	I0422 18:07:04.758731  378993 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0422 18:07:04.763594  378993 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0422 18:07:04.784484  378993 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0422 18:07:04.784629  378993 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0422 18:07:04.794285  378993 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
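
The two find commands above do complementary CNI housekeeping: the first injects a "name": "loopback" key into any loopback config that lacks one and pins cniVersion to 1.0.0; the second would rename bridge/podman configs to *.mk_disabled, but none were present. After patching, a conforming loopback config looks like this (the standard CNI loopback spec, sketched here rather than copied from the node):

	{
	    "cniVersion": "1.0.0",
	    "name": "loopback",
	    "type": "loopback"
	}
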
	I0422 18:07:04.794316  378993 start.go:494] detecting cgroup driver to use...
	I0422 18:07:04.794346  378993 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0422 18:07:04.794461  378993 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0422 18:07:04.811577  378993 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0422 18:07:04.822221  378993 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0422 18:07:04.833741  378993 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0422 18:07:04.833853  378993 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0422 18:07:04.844125  378993 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0422 18:07:04.854505  378993 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0422 18:07:04.864654  378993 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0422 18:07:04.875098  378993 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0422 18:07:04.884149  378993 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0422 18:07:04.894469  378993 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0422 18:07:04.904456  378993 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0422 18:07:04.916076  378993 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0422 18:07:04.925619  378993 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0422 18:07:04.934553  378993 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 18:07:05.044391  378993 ssh_runner.go:195] Run: sudo systemctl restart containerd
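
The run of sed edits against /etc/containerd/config.toml pins the sandbox image to pause:3.9, forces SystemdCgroup = false to match the cgroupfs driver detected on the host, and normalizes the runc runtime to io.containerd.runc.v2 before containerd is restarted. The touched keys live in the CRI plugin section; an illustrative fragment of the resulting TOML (shape assumed from containerd's stock layout, not dumped from this node):

	[plugins."io.containerd.grpc.v1.cri"]
	  sandbox_image = "registry.k8s.io/pause:3.9"
	  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
	    runtime_type = "io.containerd.runc.v2"
	    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	      SystemdCgroup = false
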
	I0422 18:07:05.158138  378993 start.go:494] detecting cgroup driver to use...
	I0422 18:07:05.158242  378993 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0422 18:07:05.158328  378993 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0422 18:07:05.176105  378993 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0422 18:07:05.176264  378993 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0422 18:07:05.190268  378993 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0422 18:07:05.208962  378993 ssh_runner.go:195] Run: which cri-dockerd
	I0422 18:07:05.212941  378993 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0422 18:07:05.222384  378993 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0422 18:07:05.242403  378993 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0422 18:07:05.363395  378993 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0422 18:07:05.507430  378993 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0422 18:07:05.507594  378993 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0422 18:07:05.531150  378993 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 18:07:05.631547  378993 ssh_runner.go:195] Run: sudo systemctl restart docker
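
docker.go wrote a 130-byte /etc/docker/daemon.json before this restart to align dockerd's cgroup driver with the kubelet. The log does not echo the file's contents; the essential key is the exec-opt below, and minikube typically adds logging and storage settings alongside it (contents assumed, not captured from this run):

	{
	    "exec-opts": ["native.cgroupdriver=cgroupfs"]
	}
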
	I0422 18:07:06.209949  378993 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0422 18:07:06.222643  378993 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0422 18:07:06.237007  378993 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0422 18:07:06.249382  378993 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0422 18:07:06.348703  378993 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0422 18:07:06.445956  378993 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 18:07:06.547249  378993 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0422 18:07:06.562483  378993 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0422 18:07:06.574905  378993 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 18:07:06.692959  378993 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0422 18:07:06.787493  378993 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0422 18:07:06.787640  378993 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0422 18:07:06.792627  378993 start.go:562] Will wait 60s for crictl version
	I0422 18:07:06.792895  378993 ssh_runner.go:195] Run: which crictl
	I0422 18:07:06.799008  378993 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0422 18:07:06.848649  378993 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0422 18:07:06.848830  378993 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0422 18:07:06.873517  378993 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0422 18:07:06.900033  378993 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0422 18:07:06.900133  378993 cli_runner.go:164] Run: docker network inspect no-preload-256480 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0422 18:07:06.915095  378993 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0422 18:07:06.918701  378993 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0422 18:07:06.929703  378993 kubeadm.go:877] updating cluster {Name:no-preload-256480 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:no-preload-256480 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0422 18:07:06.929827  378993 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0422 18:07:06.929872  378993 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0422 18:07:06.947603  378993 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0422 18:07:06.947626  378993 cache_images.go:84] Images are preloaded, skipping loading
	I0422 18:07:06.947639  378993 kubeadm.go:928] updating node { 192.168.85.2 8443 v1.30.0 docker true true} ...
	I0422 18:07:06.947742  378993 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-256480 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:no-preload-256480 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0422 18:07:06.947808  378993 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0422 18:07:06.996382  378993 cni.go:84] Creating CNI manager for ""
	I0422 18:07:06.996410  378993 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0422 18:07:06.996424  378993 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0422 18:07:06.996443  378993 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-256480 NodeName:no-preload-256480 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0422 18:07:06.996584  378993 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "no-preload-256480"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0422 18:07:06.996648  378993 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0422 18:07:07.009449  378993 binaries.go:44] Found k8s binaries, skipping transfer
	I0422 18:07:07.009523  378993 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0422 18:07:07.019402  378993 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0422 18:07:07.039350  378993 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0422 18:07:07.058866  378993 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
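
The rendered kubeadm config is staged as /var/tmp/minikube/kubeadm.yaml.new (2159 bytes) and, as with docker.service above, only swapped in if it differs from the active file. It can also be sanity-checked offline against the v1.30.0 binaries already on the node (kubeadm config validate exists from v1.26 on; the kubeadm path below assumes minikube's usual binaries layout):

	# Validate the staged config without touching the running cluster.
	sudo /var/lib/minikube/binaries/v1.30.0/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new
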
	I0422 18:07:07.078460  378993 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0422 18:07:07.082256  378993 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
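
Both /etc/hosts edits (host.minikube.internal at 18:07:06 and control-plane.minikube.internal here) use the same filter-and-append one-liner: strip any stale entry, append the current one, and copy the result back via a temp file. The two lines they leave behind in the node's /etc/hosts:

	192.168.85.1	host.minikube.internal
	192.168.85.2	control-plane.minikube.internal
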
	I0422 18:07:07.093242  378993 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 18:07:07.189690  378993 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0422 18:07:07.206606  378993 certs.go:68] Setting up /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/no-preload-256480 for IP: 192.168.85.2
	I0422 18:07:07.206627  378993 certs.go:194] generating shared ca certs ...
	I0422 18:07:07.206644  378993 certs.go:226] acquiring lock for ca certs: {Name:mkc0c6170c42b1b43b7f622fcbfe2e475bd8761f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 18:07:07.206789  378993 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18706-2371/.minikube/ca.key
	I0422 18:07:07.206840  378993 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18706-2371/.minikube/proxy-client-ca.key
	I0422 18:07:07.206850  378993 certs.go:256] generating profile certs ...
	I0422 18:07:07.206938  378993 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/no-preload-256480/client.key
	I0422 18:07:07.207014  378993 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/no-preload-256480/apiserver.key.08922ffc
	I0422 18:07:07.207058  378993 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/no-preload-256480/proxy-client.key
	I0422 18:07:07.207179  378993 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-2371/.minikube/certs/7728.pem (1338 bytes)
	W0422 18:07:07.207216  378993 certs.go:480] ignoring /home/jenkins/minikube-integration/18706-2371/.minikube/certs/7728_empty.pem, impossibly tiny 0 bytes
	I0422 18:07:07.207234  378993 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-2371/.minikube/certs/ca-key.pem (1675 bytes)
	I0422 18:07:07.207264  378993 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-2371/.minikube/certs/ca.pem (1078 bytes)
	I0422 18:07:07.207290  378993 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-2371/.minikube/certs/cert.pem (1123 bytes)
	I0422 18:07:07.207318  378993 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-2371/.minikube/certs/key.pem (1675 bytes)
	I0422 18:07:07.207363  378993 certs.go:484] found cert: /home/jenkins/minikube-integration/18706-2371/.minikube/files/etc/ssl/certs/77282.pem (1708 bytes)
	I0422 18:07:07.207986  378993 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-2371/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0422 18:07:07.240895  378993 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-2371/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0422 18:07:07.268377  378993 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-2371/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0422 18:07:07.314673  378993 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-2371/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0422 18:07:07.346310  378993 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/no-preload-256480/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0422 18:07:07.379956  378993 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/no-preload-256480/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0422 18:07:07.421360  378993 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/no-preload-256480/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0422 18:07:07.452127  378993 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/no-preload-256480/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0422 18:07:07.481712  378993 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-2371/.minikube/files/etc/ssl/certs/77282.pem --> /usr/share/ca-certificates/77282.pem (1708 bytes)
	I0422 18:07:07.513352  378993 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-2371/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0422 18:07:07.543708  378993 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18706-2371/.minikube/certs/7728.pem --> /usr/share/ca-certificates/7728.pem (1338 bytes)
	I0422 18:07:07.581860  378993 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0422 18:07:07.606585  378993 ssh_runner.go:195] Run: openssl version
	I0422 18:07:07.612317  378993 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7728.pem && ln -fs /usr/share/ca-certificates/7728.pem /etc/ssl/certs/7728.pem"
	I0422 18:07:07.623773  378993 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7728.pem
	I0422 18:07:07.627489  378993 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 22 17:02 /usr/share/ca-certificates/7728.pem
	I0422 18:07:07.627560  378993 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7728.pem
	I0422 18:07:07.634500  378993 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7728.pem /etc/ssl/certs/51391683.0"
	I0422 18:07:07.644680  378993 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/77282.pem && ln -fs /usr/share/ca-certificates/77282.pem /etc/ssl/certs/77282.pem"
	I0422 18:07:07.655158  378993 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/77282.pem
	I0422 18:07:07.658847  378993 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 22 17:02 /usr/share/ca-certificates/77282.pem
	I0422 18:07:07.658904  378993 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/77282.pem
	I0422 18:07:07.667556  378993 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/77282.pem /etc/ssl/certs/3ec20f2e.0"
	I0422 18:07:07.676905  378993 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0422 18:07:07.686488  378993 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0422 18:07:07.690107  378993 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 22 16:57 /usr/share/ca-certificates/minikubeCA.pem
	I0422 18:07:07.690227  378993 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0422 18:07:07.697614  378993 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0422 18:07:07.708744  378993 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0422 18:07:07.712302  378993 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0422 18:07:07.719148  378993 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0422 18:07:07.726314  378993 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0422 18:07:07.733183  378993 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0422 18:07:07.740204  378993 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0422 18:07:07.747175  378993 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
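
Two openssl idioms do the certificate auditing above: x509 -hash prints the subject-name hash that the /etc/ssl/certs/<hash>.0 symlinks are named after, which is how OpenSSL's CA directory lookup finds them (b5213941.0 -> minikubeCA.pem in this run), and -checkend 86400 exits non-zero if the cert expires within 24 hours, which is the signal minikube uses to regenerate. Both checks, reproduced inside the node's shell:

	# Hash-named symlink lookup used by OpenSSL's CA directory scan.
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	ls -l "/etc/ssl/certs/${h}.0"

	# Expiry guard: status 0 iff the cert is still valid 86400s from now.
	openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
	  && echo "valid for >= 24h" || echo "expiring soon: regenerate"
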
	I0422 18:07:07.754200  378993 kubeadm.go:391] StartCluster: {Name:no-preload-256480 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:no-preload-256480 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 18:07:07.754362  378993 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0422 18:07:07.771815  378993 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0422 18:07:07.781400  378993 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0422 18:07:07.781430  378993 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0422 18:07:07.781436  378993 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0422 18:07:07.781489  378993 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0422 18:07:07.791036  378993 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0422 18:07:07.791690  378993 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-256480" does not appear in /home/jenkins/minikube-integration/18706-2371/kubeconfig
	I0422 18:07:07.791999  378993 kubeconfig.go:62] /home/jenkins/minikube-integration/18706-2371/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-256480" cluster setting kubeconfig missing "no-preload-256480" context setting]
	I0422 18:07:07.792536  378993 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18706-2371/kubeconfig: {Name:mkd3bbb31387c9740f072dd59bcca857246cca69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 18:07:07.794016  378993 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0422 18:07:07.802881  378993 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.85.2
	I0422 18:07:07.802964  378993 kubeadm.go:591] duration metric: took 21.522669ms to restartPrimaryControlPlane
	I0422 18:07:07.802981  378993 kubeadm.go:393] duration metric: took 48.798009ms to StartCluster
	I0422 18:07:07.802998  378993 settings.go:142] acquiring lock: {Name:mk4d4aae5dac6b45b6276ad1e8e6929d4ff7540f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 18:07:07.803062  378993 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18706-2371/kubeconfig
	I0422 18:07:07.803977  378993 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18706-2371/kubeconfig: {Name:mkd3bbb31387c9740f072dd59bcca857246cca69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0422 18:07:07.804199  378993 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0422 18:07:07.808369  378993 out.go:177] * Verifying Kubernetes components...
	I0422 18:07:07.804575  378993 config.go:182] Loaded profile config "no-preload-256480": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0422 18:07:07.804600  378993 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0422 18:07:07.810581  378993 addons.go:69] Setting storage-provisioner=true in profile "no-preload-256480"
	I0422 18:07:07.810610  378993 addons.go:234] Setting addon storage-provisioner=true in "no-preload-256480"
	W0422 18:07:07.810618  378993 addons.go:243] addon storage-provisioner should already be in state true
	I0422 18:07:07.810645  378993 host.go:66] Checking if "no-preload-256480" exists ...
	I0422 18:07:07.810646  378993 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0422 18:07:07.810762  378993 addons.go:69] Setting dashboard=true in profile "no-preload-256480"
	I0422 18:07:07.810786  378993 addons.go:234] Setting addon dashboard=true in "no-preload-256480"
	W0422 18:07:07.810793  378993 addons.go:243] addon dashboard should already be in state true
	I0422 18:07:07.810819  378993 host.go:66] Checking if "no-preload-256480" exists ...
	I0422 18:07:07.811077  378993 cli_runner.go:164] Run: docker container inspect no-preload-256480 --format={{.State.Status}}
	I0422 18:07:07.811207  378993 cli_runner.go:164] Run: docker container inspect no-preload-256480 --format={{.State.Status}}
	I0422 18:07:07.811581  378993 addons.go:69] Setting default-storageclass=true in profile "no-preload-256480"
	I0422 18:07:07.811613  378993 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-256480"
	I0422 18:07:07.811869  378993 cli_runner.go:164] Run: docker container inspect no-preload-256480 --format={{.State.Status}}
	I0422 18:07:07.812137  378993 addons.go:69] Setting metrics-server=true in profile "no-preload-256480"
	I0422 18:07:07.812165  378993 addons.go:234] Setting addon metrics-server=true in "no-preload-256480"
	W0422 18:07:07.812172  378993 addons.go:243] addon metrics-server should already be in state true
	I0422 18:07:07.812196  378993 host.go:66] Checking if "no-preload-256480" exists ...
	I0422 18:07:07.812570  378993 cli_runner.go:164] Run: docker container inspect no-preload-256480 --format={{.State.Status}}
	I0422 18:07:07.856328  378993 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0422 18:07:07.859511  378993 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0422 18:07:07.859533  378993 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0422 18:07:07.865720  378993 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0422 18:07:07.860321  378993 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-256480
	I0422 18:07:07.868291  378993 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0422 18:07:07.868306  378993 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0422 18:07:07.868372  378993 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-256480
	I0422 18:07:07.876884  378993 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0422 18:07:07.880883  378993 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0422 18:07:07.884871  378993 addons.go:426] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0422 18:07:07.884895  378993 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0422 18:07:07.884963  378993 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-256480
	I0422 18:07:07.885314  378993 addons.go:234] Setting addon default-storageclass=true in "no-preload-256480"
	W0422 18:07:07.885335  378993 addons.go:243] addon default-storageclass should already be in state true
	I0422 18:07:07.885370  378993 host.go:66] Checking if "no-preload-256480" exists ...
	I0422 18:07:07.885757  378993 cli_runner.go:164] Run: docker container inspect no-preload-256480 --format={{.State.Status}}
	I0422 18:07:07.932384  378993 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/18706-2371/.minikube/machines/no-preload-256480/id_rsa Username:docker}
	I0422 18:07:07.938414  378993 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/18706-2371/.minikube/machines/no-preload-256480/id_rsa Username:docker}
	I0422 18:07:07.948019  378993 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0422 18:07:07.948041  378993 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0422 18:07:07.948102  378993 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-256480
	I0422 18:07:07.956875  378993 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/18706-2371/.minikube/machines/no-preload-256480/id_rsa Username:docker}
	I0422 18:07:07.980935  378993 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33144 SSHKeyPath:/home/jenkins/minikube-integration/18706-2371/.minikube/machines/no-preload-256480/id_rsa Username:docker}
	I0422 18:07:08.045336  378993 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0422 18:07:08.146279  378993 node_ready.go:35] waiting up to 6m0s for node "no-preload-256480" to be "Ready" ...
	I0422 18:07:08.183520  378993 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0422 18:07:08.183587  378993 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0422 18:07:08.256648  378993 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0422 18:07:08.283373  378993 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0422 18:07:08.311638  378993 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0422 18:07:08.311702  378993 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0422 18:07:08.359002  378993 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0422 18:07:08.359033  378993 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0422 18:07:08.447564  378993 addons.go:426] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0422 18:07:08.447591  378993 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	W0422 18:07:08.514407  378993 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0422 18:07:08.514450  378993 retry.go:31] will retry after 139.659086ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0422 18:07:08.519784  378993 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0422 18:07:08.519810  378993 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0422 18:07:08.655160  378993 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0422 18:07:08.693019  378993 addons.go:426] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0422 18:07:08.693083  378993 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0422 18:07:08.789452  378993 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0422 18:07:08.789527  378993 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0422 18:07:05.913589  365956 pod_ready.go:102] pod "metrics-server-9975d5f86-ts7pq" in "kube-system" namespace has status "Ready":"False"
	I0422 18:07:07.923781  365956 pod_ready.go:102] pod "metrics-server-9975d5f86-ts7pq" in "kube-system" namespace has status "Ready":"False"
	I0422 18:07:09.078349  378993 addons.go:426] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0422 18:07:09.078378  378993 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0422 18:07:09.097149  378993 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0422 18:07:09.252080  378993 addons.go:426] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0422 18:07:09.252168  378993 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0422 18:07:09.382205  378993 addons.go:426] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0422 18:07:09.382232  378993 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0422 18:07:09.416713  378993 addons.go:426] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0422 18:07:09.416812  378993 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0422 18:07:09.468461  378993 addons.go:426] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0422 18:07:09.468533  378993 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0422 18:07:09.505459  378993 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0422 18:07:12.975005  378993 node_ready.go:49] node "no-preload-256480" has status "Ready":"True"
	I0422 18:07:12.975029  378993 node_ready.go:38] duration metric: took 4.828707244s for node "no-preload-256480" to be "Ready" ...
	I0422 18:07:12.975040  378993 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0422 18:07:13.053150  378993 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-r6nxm" in "kube-system" namespace to be "Ready" ...
	I0422 18:07:13.093462  378993 pod_ready.go:92] pod "coredns-7db6d8ff4d-r6nxm" in "kube-system" namespace has status "Ready":"True"
	I0422 18:07:13.093537  378993 pod_ready.go:81] duration metric: took 40.300992ms for pod "coredns-7db6d8ff4d-r6nxm" in "kube-system" namespace to be "Ready" ...
	I0422 18:07:13.093578  378993 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-256480" in "kube-system" namespace to be "Ready" ...
	I0422 18:07:13.099364  378993 pod_ready.go:92] pod "etcd-no-preload-256480" in "kube-system" namespace has status "Ready":"True"
	I0422 18:07:13.099443  378993 pod_ready.go:81] duration metric: took 5.831036ms for pod "etcd-no-preload-256480" in "kube-system" namespace to be "Ready" ...
	I0422 18:07:13.099470  378993 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-256480" in "kube-system" namespace to be "Ready" ...
	I0422 18:07:13.108941  378993 pod_ready.go:92] pod "kube-apiserver-no-preload-256480" in "kube-system" namespace has status "Ready":"True"
	I0422 18:07:13.109005  378993 pod_ready.go:81] duration metric: took 9.4926ms for pod "kube-apiserver-no-preload-256480" in "kube-system" namespace to be "Ready" ...
	I0422 18:07:13.109034  378993 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-256480" in "kube-system" namespace to be "Ready" ...
	I0422 18:07:13.129099  378993 pod_ready.go:92] pod "kube-controller-manager-no-preload-256480" in "kube-system" namespace has status "Ready":"True"
	I0422 18:07:13.129171  378993 pod_ready.go:81] duration metric: took 20.11686ms for pod "kube-controller-manager-no-preload-256480" in "kube-system" namespace to be "Ready" ...
	I0422 18:07:13.129196  378993 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mjb47" in "kube-system" namespace to be "Ready" ...
	I0422 18:07:13.178338  378993 pod_ready.go:92] pod "kube-proxy-mjb47" in "kube-system" namespace has status "Ready":"True"
	I0422 18:07:13.178369  378993 pod_ready.go:81] duration metric: took 49.151165ms for pod "kube-proxy-mjb47" in "kube-system" namespace to be "Ready" ...
	I0422 18:07:13.178381  378993 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-256480" in "kube-system" namespace to be "Ready" ...
	I0422 18:07:13.581822  378993 pod_ready.go:92] pod "kube-scheduler-no-preload-256480" in "kube-system" namespace has status "Ready":"True"
	I0422 18:07:13.581854  378993 pod_ready.go:81] duration metric: took 403.464218ms for pod "kube-scheduler-no-preload-256480" in "kube-system" namespace to be "Ready" ...
	I0422 18:07:13.581867  378993 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-569cc877fc-xsb7m" in "kube-system" namespace to be "Ready" ...
	I0422 18:07:10.411742  365956 pod_ready.go:102] pod "metrics-server-9975d5f86-ts7pq" in "kube-system" namespace has status "Ready":"False"
	I0422 18:07:12.415123  365956 pod_ready.go:102] pod "metrics-server-9975d5f86-ts7pq" in "kube-system" namespace has status "Ready":"False"
	I0422 18:07:15.588901  378993 pod_ready.go:102] pod "metrics-server-569cc877fc-xsb7m" in "kube-system" namespace has status "Ready":"False"
	I0422 18:07:15.632423  378993 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.349011124s)
	I0422 18:07:15.632482  378993 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (6.977242803s)
	I0422 18:07:15.887247  378993 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.790041428s)
	I0422 18:07:15.887283  378993 addons.go:470] Verifying addon metrics-server=true in "no-preload-256480"
	I0422 18:07:15.887387  378993 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.381850954s)
	I0422 18:07:15.889532  378993 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-256480 addons enable metrics-server
	
	I0422 18:07:15.891850  378993 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0422 18:07:15.893951  378993 addons.go:505] duration metric: took 8.089348306s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0422 18:07:18.088014  378993 pod_ready.go:102] pod "metrics-server-569cc877fc-xsb7m" in "kube-system" namespace has status "Ready":"False"
	I0422 18:07:14.415686  365956 pod_ready.go:102] pod "metrics-server-9975d5f86-ts7pq" in "kube-system" namespace has status "Ready":"False"
	I0422 18:07:16.923201  365956 pod_ready.go:102] pod "metrics-server-9975d5f86-ts7pq" in "kube-system" namespace has status "Ready":"False"
	I0422 18:07:20.587890  378993 pod_ready.go:102] pod "metrics-server-569cc877fc-xsb7m" in "kube-system" namespace has status "Ready":"False"
	I0422 18:07:22.593010  378993 pod_ready.go:102] pod "metrics-server-569cc877fc-xsb7m" in "kube-system" namespace has status "Ready":"False"
	I0422 18:07:19.413145  365956 pod_ready.go:102] pod "metrics-server-9975d5f86-ts7pq" in "kube-system" namespace has status "Ready":"False"
	I0422 18:07:21.913081  365956 pod_ready.go:102] pod "metrics-server-9975d5f86-ts7pq" in "kube-system" namespace has status "Ready":"False"
	I0422 18:07:25.088259  378993 pod_ready.go:102] pod "metrics-server-569cc877fc-xsb7m" in "kube-system" namespace has status "Ready":"False"
	I0422 18:07:27.610504  378993 pod_ready.go:102] pod "metrics-server-569cc877fc-xsb7m" in "kube-system" namespace has status "Ready":"False"
	I0422 18:07:24.412193  365956 pod_ready.go:102] pod "metrics-server-9975d5f86-ts7pq" in "kube-system" namespace has status "Ready":"False"
	I0422 18:07:26.413214  365956 pod_ready.go:102] pod "metrics-server-9975d5f86-ts7pq" in "kube-system" namespace has status "Ready":"False"
	I0422 18:07:28.922687  365956 pod_ready.go:102] pod "metrics-server-9975d5f86-ts7pq" in "kube-system" namespace has status "Ready":"False"
	I0422 18:07:28.922717  365956 pod_ready.go:81] duration metric: took 4m0.0164876s for pod "metrics-server-9975d5f86-ts7pq" in "kube-system" namespace to be "Ready" ...
	E0422 18:07:28.922728  365956 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0422 18:07:28.922736  365956 pod_ready.go:38] duration metric: took 5m28.983477844s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0422 18:07:28.922790  365956 api_server.go:52] waiting for apiserver process to appear ...
	I0422 18:07:28.922897  365956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0422 18:07:28.955854  365956 logs.go:276] 2 containers: [cfd818ddce4e 36734b404817]
	I0422 18:07:28.955975  365956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0422 18:07:28.979086  365956 logs.go:276] 2 containers: [5899397ea4a9 f029b0e7c02d]
	I0422 18:07:28.979186  365956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0422 18:07:28.997173  365956 logs.go:276] 2 containers: [27648c4ab762 f9d8bddf7197]
	I0422 18:07:28.997325  365956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0422 18:07:29.033417  365956 logs.go:276] 2 containers: [8d4da1dcae53 4ef6e29c4a78]
	I0422 18:07:29.033558  365956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0422 18:07:29.076926  365956 logs.go:276] 2 containers: [99181fe4786b 2670916c7c26]
	I0422 18:07:29.077015  365956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0422 18:07:29.096098  365956 logs.go:276] 2 containers: [980fcb0f5f0b d0fea9e4ec1f]
	I0422 18:07:29.096268  365956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0422 18:07:30.092265  378993 pod_ready.go:102] pod "metrics-server-569cc877fc-xsb7m" in "kube-system" namespace has status "Ready":"False"
	I0422 18:07:32.588159  378993 pod_ready.go:102] pod "metrics-server-569cc877fc-xsb7m" in "kube-system" namespace has status "Ready":"False"
	I0422 18:07:29.115469  365956 logs.go:276] 0 containers: []
	W0422 18:07:29.115495  365956 logs.go:278] No container was found matching "kindnet"
	I0422 18:07:29.115565  365956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0422 18:07:29.135054  365956 logs.go:276] 1 containers: [38455e350073]
	I0422 18:07:29.135166  365956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0422 18:07:29.157549  365956 logs.go:276] 2 containers: [2e587f1e7363 9dca99e0f1e4]
	I0422 18:07:29.157650  365956 logs.go:123] Gathering logs for container status ...
	I0422 18:07:29.157705  365956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:07:29.256493  365956 logs.go:123] Gathering logs for dmesg ...
	I0422 18:07:29.256681  365956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:07:29.279579  365956 logs.go:123] Gathering logs for kube-apiserver [36734b404817] ...
	I0422 18:07:29.279617  365956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36734b404817"
	I0422 18:07:29.441285  365956 logs.go:123] Gathering logs for kube-scheduler [4ef6e29c4a78] ...
	I0422 18:07:29.441326  365956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ef6e29c4a78"
	I0422 18:07:29.486255  365956 logs.go:123] Gathering logs for kube-proxy [2670916c7c26] ...
	I0422 18:07:29.486291  365956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2670916c7c26"
	I0422 18:07:29.517182  365956 logs.go:123] Gathering logs for kube-controller-manager [980fcb0f5f0b] ...
	I0422 18:07:29.517212  365956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 980fcb0f5f0b"
	I0422 18:07:29.587467  365956 logs.go:123] Gathering logs for Docker ...
	I0422 18:07:29.587506  365956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0422 18:07:29.633184  365956 logs.go:123] Gathering logs for kube-apiserver [cfd818ddce4e] ...
	I0422 18:07:29.633223  365956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfd818ddce4e"
	I0422 18:07:29.708734  365956 logs.go:123] Gathering logs for etcd [f029b0e7c02d] ...
	I0422 18:07:29.708877  365956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f029b0e7c02d"
	I0422 18:07:29.751034  365956 logs.go:123] Gathering logs for coredns [27648c4ab762] ...
	I0422 18:07:29.751077  365956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27648c4ab762"
	I0422 18:07:29.796111  365956 logs.go:123] Gathering logs for kube-proxy [99181fe4786b] ...
	I0422 18:07:29.796299  365956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99181fe4786b"
	I0422 18:07:29.830508  365956 logs.go:123] Gathering logs for kubelet ...
	I0422 18:07:29.830540  365956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0422 18:07:29.888134  365956 logs.go:138] Found kubelet problem: Apr 22 18:01:59 old-k8s-version-986384 kubelet[1196]: E0422 18:01:59.832037    1196 reflector.go:138] object-"kube-system"/"storage-provisioner-token-g68k5": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-g68k5" is forbidden: User "system:node:old-k8s-version-986384" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-986384' and this object
	W0422 18:07:29.888394  365956 logs.go:138] Found kubelet problem: Apr 22 18:01:59 old-k8s-version-986384 kubelet[1196]: E0422 18:01:59.832152    1196 reflector.go:138] object-"kube-system"/"kube-proxy-token-f585x": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-f585x" is forbidden: User "system:node:old-k8s-version-986384" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-986384' and this object
	W0422 18:07:29.888628  365956 logs.go:138] Found kubelet problem: Apr 22 18:01:59 old-k8s-version-986384 kubelet[1196]: E0422 18:01:59.832232    1196 reflector.go:138] object-"default"/"default-token-b4l4p": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-b4l4p" is forbidden: User "system:node:old-k8s-version-986384" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-986384' and this object
	W0422 18:07:29.888876  365956 logs.go:138] Found kubelet problem: Apr 22 18:01:59 old-k8s-version-986384 kubelet[1196]: E0422 18:01:59.832306    1196 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-986384" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-986384' and this object
	W0422 18:07:29.889095  365956 logs.go:138] Found kubelet problem: Apr 22 18:01:59 old-k8s-version-986384 kubelet[1196]: E0422 18:01:59.832383    1196 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-986384" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-986384' and this object
	W0422 18:07:29.889307  365956 logs.go:138] Found kubelet problem: Apr 22 18:01:59 old-k8s-version-986384 kubelet[1196]: E0422 18:01:59.832559    1196 reflector.go:138] object-"kube-system"/"coredns-token-2dv2q": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-2dv2q" is forbidden: User "system:node:old-k8s-version-986384" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-986384' and this object
	W0422 18:07:29.898081  365956 logs.go:138] Found kubelet problem: Apr 22 18:02:03 old-k8s-version-986384 kubelet[1196]: E0422 18:02:03.226868    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0422 18:07:29.898812  365956 logs.go:138] Found kubelet problem: Apr 22 18:02:03 old-k8s-version-986384 kubelet[1196]: E0422 18:02:03.888305    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0422 18:07:29.901567  365956 logs.go:138] Found kubelet problem: Apr 22 18:02:15 old-k8s-version-986384 kubelet[1196]: E0422 18:02:15.641552    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0422 18:07:29.911177  365956 logs.go:138] Found kubelet problem: Apr 22 18:02:24 old-k8s-version-986384 kubelet[1196]: E0422 18:02:24.265785    1196 pod_workers.go:191] Error syncing pod e523bdd8-d720-4b25-957f-a4eb6c91e844 ("dashboard-metrics-scraper-8d5bb5db8-mf7fs_kubernetes-dashboard(e523bdd8-d720-4b25-957f-a4eb6c91e844)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0422 18:07:29.911576  365956 logs.go:138] Found kubelet problem: Apr 22 18:02:24 old-k8s-version-986384 kubelet[1196]: E0422 18:02:24.368306    1196 pod_workers.go:191] Error syncing pod e523bdd8-d720-4b25-957f-a4eb6c91e844 ("dashboard-metrics-scraper-8d5bb5db8-mf7fs_kubernetes-dashboard(e523bdd8-d720-4b25-957f-a4eb6c91e844)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0422 18:07:29.911763  365956 logs.go:138] Found kubelet problem: Apr 22 18:02:30 old-k8s-version-986384 kubelet[1196]: E0422 18:02:30.628159    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0422 18:07:29.912531  365956 logs.go:138] Found kubelet problem: Apr 22 18:02:32 old-k8s-version-986384 kubelet[1196]: E0422 18:02:32.453214    1196 pod_workers.go:191] Error syncing pod df339435-cb7d-470a-8aec-c5eb3f389a93 ("storage-provisioner_kube-system(df339435-cb7d-470a-8aec-c5eb3f389a93)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(df339435-cb7d-470a-8aec-c5eb3f389a93)"
	W0422 18:07:29.915088  365956 logs.go:138] Found kubelet problem: Apr 22 18:02:39 old-k8s-version-986384 kubelet[1196]: E0422 18:02:39.049200    1196 pod_workers.go:191] Error syncing pod e523bdd8-d720-4b25-957f-a4eb6c91e844 ("dashboard-metrics-scraper-8d5bb5db8-mf7fs_kubernetes-dashboard(e523bdd8-d720-4b25-957f-a4eb6c91e844)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0422 18:07:29.918558  365956 logs.go:138] Found kubelet problem: Apr 22 18:02:43 old-k8s-version-986384 kubelet[1196]: E0422 18:02:43.646522    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0422 18:07:29.919251  365956 logs.go:138] Found kubelet problem: Apr 22 18:02:52 old-k8s-version-986384 kubelet[1196]: E0422 18:02:52.618806    1196 pod_workers.go:191] Error syncing pod e523bdd8-d720-4b25-957f-a4eb6c91e844 ("dashboard-metrics-scraper-8d5bb5db8-mf7fs_kubernetes-dashboard(e523bdd8-d720-4b25-957f-a4eb6c91e844)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0422 18:07:29.919438  365956 logs.go:138] Found kubelet problem: Apr 22 18:02:58 old-k8s-version-986384 kubelet[1196]: E0422 18:02:58.622275    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0422 18:07:29.922358  365956 logs.go:138] Found kubelet problem: Apr 22 18:03:05 old-k8s-version-986384 kubelet[1196]: E0422 18:03:05.091231    1196 pod_workers.go:191] Error syncing pod e523bdd8-d720-4b25-957f-a4eb6c91e844 ("dashboard-metrics-scraper-8d5bb5db8-mf7fs_kubernetes-dashboard(e523bdd8-d720-4b25-957f-a4eb6c91e844)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0422 18:07:29.922613  365956 logs.go:138] Found kubelet problem: Apr 22 18:03:10 old-k8s-version-986384 kubelet[1196]: E0422 18:03:10.630961    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0422 18:07:29.922958  365956 logs.go:138] Found kubelet problem: Apr 22 18:03:16 old-k8s-version-986384 kubelet[1196]: E0422 18:03:16.618514    1196 pod_workers.go:191] Error syncing pod e523bdd8-d720-4b25-957f-a4eb6c91e844 ("dashboard-metrics-scraper-8d5bb5db8-mf7fs_kubernetes-dashboard(e523bdd8-d720-4b25-957f-a4eb6c91e844)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0422 18:07:29.923201  365956 logs.go:138] Found kubelet problem: Apr 22 18:03:23 old-k8s-version-986384 kubelet[1196]: E0422 18:03:23.618416    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0422 18:07:29.923405  365956 logs.go:138] Found kubelet problem: Apr 22 18:03:30 old-k8s-version-986384 kubelet[1196]: E0422 18:03:30.632521    1196 pod_workers.go:191] Error syncing pod e523bdd8-d720-4b25-957f-a4eb6c91e844 ("dashboard-metrics-scraper-8d5bb5db8-mf7fs_kubernetes-dashboard(e523bdd8-d720-4b25-957f-a4eb6c91e844)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0422 18:07:29.925888  365956 logs.go:138] Found kubelet problem: Apr 22 18:03:34 old-k8s-version-986384 kubelet[1196]: E0422 18:03:34.649485    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0422 18:07:29.926096  365956 logs.go:138] Found kubelet problem: Apr 22 18:03:43 old-k8s-version-986384 kubelet[1196]: E0422 18:03:43.618651    1196 pod_workers.go:191] Error syncing pod e523bdd8-d720-4b25-957f-a4eb6c91e844 ("dashboard-metrics-scraper-8d5bb5db8-mf7fs_kubernetes-dashboard(e523bdd8-d720-4b25-957f-a4eb6c91e844)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0422 18:07:29.926287  365956 logs.go:138] Found kubelet problem: Apr 22 18:03:46 old-k8s-version-986384 kubelet[1196]: E0422 18:03:46.619360    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0422 18:07:29.928592  365956 logs.go:138] Found kubelet problem: Apr 22 18:03:59 old-k8s-version-986384 kubelet[1196]: E0422 18:03:59.083435    1196 pod_workers.go:191] Error syncing pod e523bdd8-d720-4b25-957f-a4eb6c91e844 ("dashboard-metrics-scraper-8d5bb5db8-mf7fs_kubernetes-dashboard(e523bdd8-d720-4b25-957f-a4eb6c91e844)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0422 18:07:29.936494  365956 logs.go:138] Found kubelet problem: Apr 22 18:03:59 old-k8s-version-986384 kubelet[1196]: E0422 18:03:59.618394    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0422 18:07:29.936812  365956 logs.go:138] Found kubelet problem: Apr 22 18:04:12 old-k8s-version-986384 kubelet[1196]: E0422 18:04:12.618866    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0422 18:07:29.937069  365956 logs.go:138] Found kubelet problem: Apr 22 18:04:13 old-k8s-version-986384 kubelet[1196]: E0422 18:04:13.618430    1196 pod_workers.go:191] Error syncing pod e523bdd8-d720-4b25-957f-a4eb6c91e844 ("dashboard-metrics-scraper-8d5bb5db8-mf7fs_kubernetes-dashboard(e523bdd8-d720-4b25-957f-a4eb6c91e844)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0422 18:07:29.937262  365956 logs.go:138] Found kubelet problem: Apr 22 18:04:23 old-k8s-version-986384 kubelet[1196]: E0422 18:04:23.618440    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0422 18:07:29.937642  365956 logs.go:138] Found kubelet problem: Apr 22 18:04:24 old-k8s-version-986384 kubelet[1196]: E0422 18:04:24.643322    1196 pod_workers.go:191] Error syncing pod e523bdd8-d720-4b25-957f-a4eb6c91e844 ("dashboard-metrics-scraper-8d5bb5db8-mf7fs_kubernetes-dashboard(e523bdd8-d720-4b25-957f-a4eb6c91e844)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0422 18:07:29.937831  365956 logs.go:138] Found kubelet problem: Apr 22 18:04:38 old-k8s-version-986384 kubelet[1196]: E0422 18:04:38.629329    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0422 18:07:29.938079  365956 logs.go:138] Found kubelet problem: Apr 22 18:04:38 old-k8s-version-986384 kubelet[1196]: E0422 18:04:38.634742    1196 pod_workers.go:191] Error syncing pod e523bdd8-d720-4b25-957f-a4eb6c91e844 ("dashboard-metrics-scraper-8d5bb5db8-mf7fs_kubernetes-dashboard(e523bdd8-d720-4b25-957f-a4eb6c91e844)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0422 18:07:29.938402  365956 logs.go:138] Found kubelet problem: Apr 22 18:04:49 old-k8s-version-986384 kubelet[1196]: E0422 18:04:49.618463    1196 pod_workers.go:191] Error syncing pod e523bdd8-d720-4b25-957f-a4eb6c91e844 ("dashboard-metrics-scraper-8d5bb5db8-mf7fs_kubernetes-dashboard(e523bdd8-d720-4b25-957f-a4eb6c91e844)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0422 18:07:29.938592  365956 logs.go:138] Found kubelet problem: Apr 22 18:04:53 old-k8s-version-986384 kubelet[1196]: E0422 18:04:53.618422    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0422 18:07:29.938788  365956 logs.go:138] Found kubelet problem: Apr 22 18:05:00 old-k8s-version-986384 kubelet[1196]: E0422 18:05:00.619147    1196 pod_workers.go:191] Error syncing pod e523bdd8-d720-4b25-957f-a4eb6c91e844 ("dashboard-metrics-scraper-8d5bb5db8-mf7fs_kubernetes-dashboard(e523bdd8-d720-4b25-957f-a4eb6c91e844)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0422 18:07:29.940980  365956 logs.go:138] Found kubelet problem: Apr 22 18:05:05 old-k8s-version-986384 kubelet[1196]: E0422 18:05:05.638308    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0422 18:07:29.941308  365956 logs.go:138] Found kubelet problem: Apr 22 18:05:11 old-k8s-version-986384 kubelet[1196]: E0422 18:05:11.618472    1196 pod_workers.go:191] Error syncing pod e523bdd8-d720-4b25-957f-a4eb6c91e844 ("dashboard-metrics-scraper-8d5bb5db8-mf7fs_kubernetes-dashboard(e523bdd8-d720-4b25-957f-a4eb6c91e844)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0422 18:07:29.941516  365956 logs.go:138] Found kubelet problem: Apr 22 18:05:16 old-k8s-version-986384 kubelet[1196]: E0422 18:05:16.627331    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0422 18:07:29.944352  365956 logs.go:138] Found kubelet problem: Apr 22 18:05:23 old-k8s-version-986384 kubelet[1196]: E0422 18:05:23.061150    1196 pod_workers.go:191] Error syncing pod e523bdd8-d720-4b25-957f-a4eb6c91e844 ("dashboard-metrics-scraper-8d5bb5db8-mf7fs_kubernetes-dashboard(e523bdd8-d720-4b25-957f-a4eb6c91e844)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0422 18:07:29.944593  365956 logs.go:138] Found kubelet problem: Apr 22 18:05:28 old-k8s-version-986384 kubelet[1196]: E0422 18:05:28.618921    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0422 18:07:29.945064  365956 logs.go:138] Found kubelet problem: Apr 22 18:05:34 old-k8s-version-986384 kubelet[1196]: E0422 18:05:34.621754    1196 pod_workers.go:191] Error syncing pod e523bdd8-d720-4b25-957f-a4eb6c91e844 ("dashboard-metrics-scraper-8d5bb5db8-mf7fs_kubernetes-dashboard(e523bdd8-d720-4b25-957f-a4eb6c91e844)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0422 18:07:29.945265  365956 logs.go:138] Found kubelet problem: Apr 22 18:05:39 old-k8s-version-986384 kubelet[1196]: E0422 18:05:39.618114    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0422 18:07:29.945464  365956 logs.go:138] Found kubelet problem: Apr 22 18:05:45 old-k8s-version-986384 kubelet[1196]: E0422 18:05:45.663272    1196 pod_workers.go:191] Error syncing pod e523bdd8-d720-4b25-957f-a4eb6c91e844 ("dashboard-metrics-scraper-8d5bb5db8-mf7fs_kubernetes-dashboard(e523bdd8-d720-4b25-957f-a4eb6c91e844)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0422 18:07:29.945650  365956 logs.go:138] Found kubelet problem: Apr 22 18:05:51 old-k8s-version-986384 kubelet[1196]: E0422 18:05:51.618807    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0422 18:07:29.945847  365956 logs.go:138] Found kubelet problem: Apr 22 18:05:58 old-k8s-version-986384 kubelet[1196]: E0422 18:05:58.648913    1196 pod_workers.go:191] Error syncing pod e523bdd8-d720-4b25-957f-a4eb6c91e844 ("dashboard-metrics-scraper-8d5bb5db8-mf7fs_kubernetes-dashboard(e523bdd8-d720-4b25-957f-a4eb6c91e844)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0422 18:07:29.946032  365956 logs.go:138] Found kubelet problem: Apr 22 18:06:04 old-k8s-version-986384 kubelet[1196]: E0422 18:06:04.618373    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0422 18:07:29.946230  365956 logs.go:138] Found kubelet problem: Apr 22 18:06:11 old-k8s-version-986384 kubelet[1196]: E0422 18:06:11.618095    1196 pod_workers.go:191] Error syncing pod e523bdd8-d720-4b25-957f-a4eb6c91e844 ("dashboard-metrics-scraper-8d5bb5db8-mf7fs_kubernetes-dashboard(e523bdd8-d720-4b25-957f-a4eb6c91e844)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0422 18:07:29.946414  365956 logs.go:138] Found kubelet problem: Apr 22 18:06:15 old-k8s-version-986384 kubelet[1196]: E0422 18:06:15.618294    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0422 18:07:29.946610  365956 logs.go:138] Found kubelet problem: Apr 22 18:06:25 old-k8s-version-986384 kubelet[1196]: E0422 18:06:25.619230    1196 pod_workers.go:191] Error syncing pod e523bdd8-d720-4b25-957f-a4eb6c91e844 ("dashboard-metrics-scraper-8d5bb5db8-mf7fs_kubernetes-dashboard(e523bdd8-d720-4b25-957f-a4eb6c91e844)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0422 18:07:29.946797  365956 logs.go:138] Found kubelet problem: Apr 22 18:06:30 old-k8s-version-986384 kubelet[1196]: E0422 18:06:30.618689    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0422 18:07:29.946994  365956 logs.go:138] Found kubelet problem: Apr 22 18:06:39 old-k8s-version-986384 kubelet[1196]: E0422 18:06:39.618588    1196 pod_workers.go:191] Error syncing pod e523bdd8-d720-4b25-957f-a4eb6c91e844 ("dashboard-metrics-scraper-8d5bb5db8-mf7fs_kubernetes-dashboard(e523bdd8-d720-4b25-957f-a4eb6c91e844)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0422 18:07:29.947177  365956 logs.go:138] Found kubelet problem: Apr 22 18:06:43 old-k8s-version-986384 kubelet[1196]: E0422 18:06:43.618416    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0422 18:07:29.947371  365956 logs.go:138] Found kubelet problem: Apr 22 18:06:54 old-k8s-version-986384 kubelet[1196]: E0422 18:06:54.627082    1196 pod_workers.go:191] Error syncing pod e523bdd8-d720-4b25-957f-a4eb6c91e844 ("dashboard-metrics-scraper-8d5bb5db8-mf7fs_kubernetes-dashboard(e523bdd8-d720-4b25-957f-a4eb6c91e844)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0422 18:07:29.947554  365956 logs.go:138] Found kubelet problem: Apr 22 18:06:55 old-k8s-version-986384 kubelet[1196]: E0422 18:06:55.618270    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0422 18:07:29.947736  365956 logs.go:138] Found kubelet problem: Apr 22 18:07:06 old-k8s-version-986384 kubelet[1196]: E0422 18:07:06.619950    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0422 18:07:29.947930  365956 logs.go:138] Found kubelet problem: Apr 22 18:07:07 old-k8s-version-986384 kubelet[1196]: E0422 18:07:07.618622    1196 pod_workers.go:191] Error syncing pod e523bdd8-d720-4b25-957f-a4eb6c91e844 ("dashboard-metrics-scraper-8d5bb5db8-mf7fs_kubernetes-dashboard(e523bdd8-d720-4b25-957f-a4eb6c91e844)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0422 18:07:29.948113  365956 logs.go:138] Found kubelet problem: Apr 22 18:07:18 old-k8s-version-986384 kubelet[1196]: E0422 18:07:18.622590    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0422 18:07:29.948309  365956 logs.go:138] Found kubelet problem: Apr 22 18:07:18 old-k8s-version-986384 kubelet[1196]: E0422 18:07:18.645475    1196 pod_workers.go:191] Error syncing pod e523bdd8-d720-4b25-957f-a4eb6c91e844 ("dashboard-metrics-scraper-8d5bb5db8-mf7fs_kubernetes-dashboard(e523bdd8-d720-4b25-957f-a4eb6c91e844)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0422 18:07:29.948496  365956 logs.go:138] Found kubelet problem: Apr 22 18:07:29 old-k8s-version-986384 kubelet[1196]: E0422 18:07:29.619046    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0422 18:07:29.948506  365956 logs.go:123] Gathering logs for kubernetes-dashboard [38455e350073] ...
	I0422 18:07:29.948519  365956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38455e350073"
	I0422 18:07:29.972630  365956 logs.go:123] Gathering logs for storage-provisioner [2e587f1e7363] ...
	I0422 18:07:29.972659  365956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e587f1e7363"
	I0422 18:07:29.993589  365956 logs.go:123] Gathering logs for storage-provisioner [9dca99e0f1e4] ...
	I0422 18:07:29.993618  365956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9dca99e0f1e4"
	I0422 18:07:30.030083  365956 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:07:30.030117  365956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0422 18:07:30.441479  365956 logs.go:123] Gathering logs for etcd [5899397ea4a9] ...
	I0422 18:07:30.441542  365956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5899397ea4a9"
	I0422 18:07:30.498099  365956 logs.go:123] Gathering logs for coredns [f9d8bddf7197] ...
	I0422 18:07:30.498181  365956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9d8bddf7197"
	I0422 18:07:30.531927  365956 logs.go:123] Gathering logs for kube-scheduler [8d4da1dcae53] ...
	I0422 18:07:30.531997  365956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d4da1dcae53"
	I0422 18:07:30.579200  365956 logs.go:123] Gathering logs for kube-controller-manager [d0fea9e4ec1f] ...
	I0422 18:07:30.579279  365956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0fea9e4ec1f"
	I0422 18:07:30.701639  365956 out.go:304] Setting ErrFile to fd 2...
	I0422 18:07:30.701707  365956 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0422 18:07:30.701771  365956 out.go:239] X Problems detected in kubelet:
	W0422 18:07:30.701816  365956 out.go:239]   Apr 22 18:07:06 old-k8s-version-986384 kubelet[1196]: E0422 18:07:06.619950    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0422 18:07:30.701850  365956 out.go:239]   Apr 22 18:07:07 old-k8s-version-986384 kubelet[1196]: E0422 18:07:07.618622    1196 pod_workers.go:191] Error syncing pod e523bdd8-d720-4b25-957f-a4eb6c91e844 ("dashboard-metrics-scraper-8d5bb5db8-mf7fs_kubernetes-dashboard(e523bdd8-d720-4b25-957f-a4eb6c91e844)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0422 18:07:30.701896  365956 out.go:239]   Apr 22 18:07:18 old-k8s-version-986384 kubelet[1196]: E0422 18:07:18.622590    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0422 18:07:30.701940  365956 out.go:239]   Apr 22 18:07:18 old-k8s-version-986384 kubelet[1196]: E0422 18:07:18.645475    1196 pod_workers.go:191] Error syncing pod e523bdd8-d720-4b25-957f-a4eb6c91e844 ("dashboard-metrics-scraper-8d5bb5db8-mf7fs_kubernetes-dashboard(e523bdd8-d720-4b25-957f-a4eb6c91e844)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0422 18:07:30.701975  365956 out.go:239]   Apr 22 18:07:29 old-k8s-version-986384 kubelet[1196]: E0422 18:07:29.619046    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0422 18:07:30.702015  365956 out.go:304] Setting ErrFile to fd 2...
	I0422 18:07:30.702037  365956 out.go:338] TERM=,COLORTERM=, which probably does not support color
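Annotation: every kubelet problem flagged above is an image-pull failure. The metrics-server pod references fake.domain/registry.k8s.io/echoserver:1.4, whose registry host does not resolve, while dashboard-metrics-scraper references registry.k8s.io/echoserver:1.4, which the daemon rejects as a deprecated schema-1 image. As a minimal sketch (assuming a host with the Docker CLI and, like this runner, no DNS entry for fake.domain), the first failure reproduces outside the cluster:

	# Pull the same image reference the kubelet reports; the pull should
	# fail at the DNS lookup stage, matching the ErrImagePull lines above.
	docker pull fake.domain/registry.k8s.io/echoserver:1.4
	# Expected (approximately):
	#   Error response from daemon: Get "https://fake.domain/v2/":
	#   dial tcp: lookup fake.domain: no such host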
	I0422 18:07:34.591743  378993 pod_ready.go:102] pod "metrics-server-569cc877fc-xsb7m" in "kube-system" namespace has status "Ready":"False"
	I0422 18:07:37.088502  378993 pod_ready.go:102] pod "metrics-server-569cc877fc-xsb7m" in "kube-system" namespace has status "Ready":"False"
	I0422 18:07:39.587820  378993 pod_ready.go:102] pod "metrics-server-569cc877fc-xsb7m" in "kube-system" namespace has status "Ready":"False"
	I0422 18:07:41.588603  378993 pod_ready.go:102] pod "metrics-server-569cc877fc-xsb7m" in "kube-system" namespace has status "Ready":"False"
	I0422 18:07:40.703400  365956 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 18:07:40.716516  365956 api_server.go:72] duration metric: took 5m53.082208744s to wait for apiserver process to appear ...
	I0422 18:07:40.716544  365956 api_server.go:88] waiting for apiserver healthz status ...
	I0422 18:07:40.716659  365956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0422 18:07:40.742123  365956 logs.go:276] 2 containers: [cfd818ddce4e 36734b404817]
	I0422 18:07:40.742218  365956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0422 18:07:40.763131  365956 logs.go:276] 2 containers: [5899397ea4a9 f029b0e7c02d]
	I0422 18:07:40.763221  365956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0422 18:07:40.779304  365956 logs.go:276] 2 containers: [27648c4ab762 f9d8bddf7197]
	I0422 18:07:40.779383  365956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0422 18:07:40.799440  365956 logs.go:276] 2 containers: [8d4da1dcae53 4ef6e29c4a78]
	I0422 18:07:40.799522  365956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0422 18:07:40.815299  365956 logs.go:276] 2 containers: [99181fe4786b 2670916c7c26]
	I0422 18:07:40.815385  365956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0422 18:07:40.831521  365956 logs.go:276] 2 containers: [980fcb0f5f0b d0fea9e4ec1f]
	I0422 18:07:40.831603  365956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0422 18:07:40.850909  365956 logs.go:276] 0 containers: []
	W0422 18:07:40.850987  365956 logs.go:278] No container was found matching "kindnet"
	I0422 18:07:40.851073  365956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0422 18:07:40.867546  365956 logs.go:276] 2 containers: [2e587f1e7363 9dca99e0f1e4]
	I0422 18:07:40.867663  365956 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0422 18:07:40.888163  365956 logs.go:276] 1 containers: [38455e350073]
	I0422 18:07:40.888241  365956 logs.go:123] Gathering logs for kubelet ...
	I0422 18:07:40.888267  365956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0422 18:07:40.949225  365956 logs.go:138] Found kubelet problem: Apr 22 18:01:59 old-k8s-version-986384 kubelet[1196]: E0422 18:01:59.832037    1196 reflector.go:138] object-"kube-system"/"storage-provisioner-token-g68k5": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-g68k5" is forbidden: User "system:node:old-k8s-version-986384" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-986384' and this object
	W0422 18:07:40.949459  365956 logs.go:138] Found kubelet problem: Apr 22 18:01:59 old-k8s-version-986384 kubelet[1196]: E0422 18:01:59.832152    1196 reflector.go:138] object-"kube-system"/"kube-proxy-token-f585x": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-f585x" is forbidden: User "system:node:old-k8s-version-986384" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-986384' and this object
	W0422 18:07:40.949670  365956 logs.go:138] Found kubelet problem: Apr 22 18:01:59 old-k8s-version-986384 kubelet[1196]: E0422 18:01:59.832232    1196 reflector.go:138] object-"default"/"default-token-b4l4p": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-b4l4p" is forbidden: User "system:node:old-k8s-version-986384" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-986384' and this object
	W0422 18:07:40.949875  365956 logs.go:138] Found kubelet problem: Apr 22 18:01:59 old-k8s-version-986384 kubelet[1196]: E0422 18:01:59.832306    1196 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-986384" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-986384' and this object
	W0422 18:07:40.950077  365956 logs.go:138] Found kubelet problem: Apr 22 18:01:59 old-k8s-version-986384 kubelet[1196]: E0422 18:01:59.832383    1196 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-986384" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-986384' and this object
	W0422 18:07:40.950286  365956 logs.go:138] Found kubelet problem: Apr 22 18:01:59 old-k8s-version-986384 kubelet[1196]: E0422 18:01:59.832559    1196 reflector.go:138] object-"kube-system"/"coredns-token-2dv2q": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-2dv2q" is forbidden: User "system:node:old-k8s-version-986384" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-986384' and this object
	W0422 18:07:40.958658  365956 logs.go:138] Found kubelet problem: Apr 22 18:02:03 old-k8s-version-986384 kubelet[1196]: E0422 18:02:03.226868    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0422 18:07:40.959350  365956 logs.go:138] Found kubelet problem: Apr 22 18:02:03 old-k8s-version-986384 kubelet[1196]: E0422 18:02:03.888305    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0422 18:07:40.961733  365956 logs.go:138] Found kubelet problem: Apr 22 18:02:15 old-k8s-version-986384 kubelet[1196]: E0422 18:02:15.641552    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0422 18:07:40.966390  365956 logs.go:138] Found kubelet problem: Apr 22 18:02:24 old-k8s-version-986384 kubelet[1196]: E0422 18:02:24.265785    1196 pod_workers.go:191] Error syncing pod e523bdd8-d720-4b25-957f-a4eb6c91e844 ("dashboard-metrics-scraper-8d5bb5db8-mf7fs_kubernetes-dashboard(e523bdd8-d720-4b25-957f-a4eb6c91e844)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0422 18:07:40.966763  365956 logs.go:138] Found kubelet problem: Apr 22 18:02:24 old-k8s-version-986384 kubelet[1196]: E0422 18:02:24.368306    1196 pod_workers.go:191] Error syncing pod e523bdd8-d720-4b25-957f-a4eb6c91e844 ("dashboard-metrics-scraper-8d5bb5db8-mf7fs_kubernetes-dashboard(e523bdd8-d720-4b25-957f-a4eb6c91e844)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0422 18:07:40.966950  365956 logs.go:138] Found kubelet problem: Apr 22 18:02:30 old-k8s-version-986384 kubelet[1196]: E0422 18:02:30.628159    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0422 18:07:40.967715  365956 logs.go:138] Found kubelet problem: Apr 22 18:02:32 old-k8s-version-986384 kubelet[1196]: E0422 18:02:32.453214    1196 pod_workers.go:191] Error syncing pod df339435-cb7d-470a-8aec-c5eb3f389a93 ("storage-provisioner_kube-system(df339435-cb7d-470a-8aec-c5eb3f389a93)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(df339435-cb7d-470a-8aec-c5eb3f389a93)"
	W0422 18:07:40.970011  365956 logs.go:138] Found kubelet problem: Apr 22 18:02:39 old-k8s-version-986384 kubelet[1196]: E0422 18:02:39.049200    1196 pod_workers.go:191] Error syncing pod e523bdd8-d720-4b25-957f-a4eb6c91e844 ("dashboard-metrics-scraper-8d5bb5db8-mf7fs_kubernetes-dashboard(e523bdd8-d720-4b25-957f-a4eb6c91e844)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0422 18:07:40.972390  365956 logs.go:138] Found kubelet problem: Apr 22 18:02:43 old-k8s-version-986384 kubelet[1196]: E0422 18:02:43.646522    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0422 18:07:40.972984  365956 logs.go:138] Found kubelet problem: Apr 22 18:02:52 old-k8s-version-986384 kubelet[1196]: E0422 18:02:52.618806    1196 pod_workers.go:191] Error syncing pod e523bdd8-d720-4b25-957f-a4eb6c91e844 ("dashboard-metrics-scraper-8d5bb5db8-mf7fs_kubernetes-dashboard(e523bdd8-d720-4b25-957f-a4eb6c91e844)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0422 18:07:40.973171  365956 logs.go:138] Found kubelet problem: Apr 22 18:02:58 old-k8s-version-986384 kubelet[1196]: E0422 18:02:58.622275    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0422 18:07:40.975367  365956 logs.go:138] Found kubelet problem: Apr 22 18:03:05 old-k8s-version-986384 kubelet[1196]: E0422 18:03:05.091231    1196 pod_workers.go:191] Error syncing pod e523bdd8-d720-4b25-957f-a4eb6c91e844 ("dashboard-metrics-scraper-8d5bb5db8-mf7fs_kubernetes-dashboard(e523bdd8-d720-4b25-957f-a4eb6c91e844)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0422 18:07:40.975564  365956 logs.go:138] Found kubelet problem: Apr 22 18:03:10 old-k8s-version-986384 kubelet[1196]: E0422 18:03:10.630961    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0422 18:07:40.975759  365956 logs.go:138] Found kubelet problem: Apr 22 18:03:16 old-k8s-version-986384 kubelet[1196]: E0422 18:03:16.618514    1196 pod_workers.go:191] Error syncing pod e523bdd8-d720-4b25-957f-a4eb6c91e844 ("dashboard-metrics-scraper-8d5bb5db8-mf7fs_kubernetes-dashboard(e523bdd8-d720-4b25-957f-a4eb6c91e844)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0422 18:07:40.975944  365956 logs.go:138] Found kubelet problem: Apr 22 18:03:23 old-k8s-version-986384 kubelet[1196]: E0422 18:03:23.618416    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0422 18:07:40.976140  365956 logs.go:138] Found kubelet problem: Apr 22 18:03:30 old-k8s-version-986384 kubelet[1196]: E0422 18:03:30.632521    1196 pod_workers.go:191] Error syncing pod e523bdd8-d720-4b25-957f-a4eb6c91e844 ("dashboard-metrics-scraper-8d5bb5db8-mf7fs_kubernetes-dashboard(e523bdd8-d720-4b25-957f-a4eb6c91e844)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0422 18:07:40.978202  365956 logs.go:138] Found kubelet problem: Apr 22 18:03:34 old-k8s-version-986384 kubelet[1196]: E0422 18:03:34.649485    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0422 18:07:40.978403  365956 logs.go:138] Found kubelet problem: Apr 22 18:03:43 old-k8s-version-986384 kubelet[1196]: E0422 18:03:43.618651    1196 pod_workers.go:191] Error syncing pod e523bdd8-d720-4b25-957f-a4eb6c91e844 ("dashboard-metrics-scraper-8d5bb5db8-mf7fs_kubernetes-dashboard(e523bdd8-d720-4b25-957f-a4eb6c91e844)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0422 18:07:40.978587  365956 logs.go:138] Found kubelet problem: Apr 22 18:03:46 old-k8s-version-986384 kubelet[1196]: E0422 18:03:46.619360    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0422 18:07:40.980790  365956 logs.go:138] Found kubelet problem: Apr 22 18:03:59 old-k8s-version-986384 kubelet[1196]: E0422 18:03:59.083435    1196 pod_workers.go:191] Error syncing pod e523bdd8-d720-4b25-957f-a4eb6c91e844 ("dashboard-metrics-scraper-8d5bb5db8-mf7fs_kubernetes-dashboard(e523bdd8-d720-4b25-957f-a4eb6c91e844)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0422 18:07:40.980977  365956 logs.go:138] Found kubelet problem: Apr 22 18:03:59 old-k8s-version-986384 kubelet[1196]: E0422 18:03:59.618394    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0422 18:07:40.981160  365956 logs.go:138] Found kubelet problem: Apr 22 18:04:12 old-k8s-version-986384 kubelet[1196]: E0422 18:04:12.618866    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0422 18:07:40.981355  365956 logs.go:138] Found kubelet problem: Apr 22 18:04:13 old-k8s-version-986384 kubelet[1196]: E0422 18:04:13.618430    1196 pod_workers.go:191] Error syncing pod e523bdd8-d720-4b25-957f-a4eb6c91e844 ("dashboard-metrics-scraper-8d5bb5db8-mf7fs_kubernetes-dashboard(e523bdd8-d720-4b25-957f-a4eb6c91e844)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0422 18:07:40.981537  365956 logs.go:138] Found kubelet problem: Apr 22 18:04:23 old-k8s-version-986384 kubelet[1196]: E0422 18:04:23.618440    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0422 18:07:40.981732  365956 logs.go:138] Found kubelet problem: Apr 22 18:04:24 old-k8s-version-986384 kubelet[1196]: E0422 18:04:24.643322    1196 pod_workers.go:191] Error syncing pod e523bdd8-d720-4b25-957f-a4eb6c91e844 ("dashboard-metrics-scraper-8d5bb5db8-mf7fs_kubernetes-dashboard(e523bdd8-d720-4b25-957f-a4eb6c91e844)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0422 18:07:40.981914  365956 logs.go:138] Found kubelet problem: Apr 22 18:04:38 old-k8s-version-986384 kubelet[1196]: E0422 18:04:38.629329    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0422 18:07:40.982108  365956 logs.go:138] Found kubelet problem: Apr 22 18:04:38 old-k8s-version-986384 kubelet[1196]: E0422 18:04:38.634742    1196 pod_workers.go:191] Error syncing pod e523bdd8-d720-4b25-957f-a4eb6c91e844 ("dashboard-metrics-scraper-8d5bb5db8-mf7fs_kubernetes-dashboard(e523bdd8-d720-4b25-957f-a4eb6c91e844)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0422 18:07:40.982305  365956 logs.go:138] Found kubelet problem: Apr 22 18:04:49 old-k8s-version-986384 kubelet[1196]: E0422 18:04:49.618463    1196 pod_workers.go:191] Error syncing pod e523bdd8-d720-4b25-957f-a4eb6c91e844 ("dashboard-metrics-scraper-8d5bb5db8-mf7fs_kubernetes-dashboard(e523bdd8-d720-4b25-957f-a4eb6c91e844)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0422 18:07:40.982487  365956 logs.go:138] Found kubelet problem: Apr 22 18:04:53 old-k8s-version-986384 kubelet[1196]: E0422 18:04:53.618422    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0422 18:07:40.982683  365956 logs.go:138] Found kubelet problem: Apr 22 18:05:00 old-k8s-version-986384 kubelet[1196]: E0422 18:05:00.619147    1196 pod_workers.go:191] Error syncing pod e523bdd8-d720-4b25-957f-a4eb6c91e844 ("dashboard-metrics-scraper-8d5bb5db8-mf7fs_kubernetes-dashboard(e523bdd8-d720-4b25-957f-a4eb6c91e844)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0422 18:07:40.984722  365956 logs.go:138] Found kubelet problem: Apr 22 18:05:05 old-k8s-version-986384 kubelet[1196]: E0422 18:05:05.638308    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0422 18:07:40.984923  365956 logs.go:138] Found kubelet problem: Apr 22 18:05:11 old-k8s-version-986384 kubelet[1196]: E0422 18:05:11.618472    1196 pod_workers.go:191] Error syncing pod e523bdd8-d720-4b25-957f-a4eb6c91e844 ("dashboard-metrics-scraper-8d5bb5db8-mf7fs_kubernetes-dashboard(e523bdd8-d720-4b25-957f-a4eb6c91e844)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0422 18:07:40.985106  365956 logs.go:138] Found kubelet problem: Apr 22 18:05:16 old-k8s-version-986384 kubelet[1196]: E0422 18:05:16.627331    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0422 18:07:40.987305  365956 logs.go:138] Found kubelet problem: Apr 22 18:05:23 old-k8s-version-986384 kubelet[1196]: E0422 18:05:23.061150    1196 pod_workers.go:191] Error syncing pod e523bdd8-d720-4b25-957f-a4eb6c91e844 ("dashboard-metrics-scraper-8d5bb5db8-mf7fs_kubernetes-dashboard(e523bdd8-d720-4b25-957f-a4eb6c91e844)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0422 18:07:40.987489  365956 logs.go:138] Found kubelet problem: Apr 22 18:05:28 old-k8s-version-986384 kubelet[1196]: E0422 18:05:28.618921    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0422 18:07:40.987685  365956 logs.go:138] Found kubelet problem: Apr 22 18:05:34 old-k8s-version-986384 kubelet[1196]: E0422 18:05:34.621754    1196 pod_workers.go:191] Error syncing pod e523bdd8-d720-4b25-957f-a4eb6c91e844 ("dashboard-metrics-scraper-8d5bb5db8-mf7fs_kubernetes-dashboard(e523bdd8-d720-4b25-957f-a4eb6c91e844)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0422 18:07:40.987869  365956 logs.go:138] Found kubelet problem: Apr 22 18:05:39 old-k8s-version-986384 kubelet[1196]: E0422 18:05:39.618114    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0422 18:07:40.988066  365956 logs.go:138] Found kubelet problem: Apr 22 18:05:45 old-k8s-version-986384 kubelet[1196]: E0422 18:05:45.663272    1196 pod_workers.go:191] Error syncing pod e523bdd8-d720-4b25-957f-a4eb6c91e844 ("dashboard-metrics-scraper-8d5bb5db8-mf7fs_kubernetes-dashboard(e523bdd8-d720-4b25-957f-a4eb6c91e844)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0422 18:07:40.988248  365956 logs.go:138] Found kubelet problem: Apr 22 18:05:51 old-k8s-version-986384 kubelet[1196]: E0422 18:05:51.618807    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0422 18:07:40.988446  365956 logs.go:138] Found kubelet problem: Apr 22 18:05:58 old-k8s-version-986384 kubelet[1196]: E0422 18:05:58.648913    1196 pod_workers.go:191] Error syncing pod e523bdd8-d720-4b25-957f-a4eb6c91e844 ("dashboard-metrics-scraper-8d5bb5db8-mf7fs_kubernetes-dashboard(e523bdd8-d720-4b25-957f-a4eb6c91e844)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0422 18:07:40.988650  365956 logs.go:138] Found kubelet problem: Apr 22 18:06:04 old-k8s-version-986384 kubelet[1196]: E0422 18:06:04.618373    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0422 18:07:40.988855  365956 logs.go:138] Found kubelet problem: Apr 22 18:06:11 old-k8s-version-986384 kubelet[1196]: E0422 18:06:11.618095    1196 pod_workers.go:191] Error syncing pod e523bdd8-d720-4b25-957f-a4eb6c91e844 ("dashboard-metrics-scraper-8d5bb5db8-mf7fs_kubernetes-dashboard(e523bdd8-d720-4b25-957f-a4eb6c91e844)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0422 18:07:40.989041  365956 logs.go:138] Found kubelet problem: Apr 22 18:06:15 old-k8s-version-986384 kubelet[1196]: E0422 18:06:15.618294    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0422 18:07:40.989235  365956 logs.go:138] Found kubelet problem: Apr 22 18:06:25 old-k8s-version-986384 kubelet[1196]: E0422 18:06:25.619230    1196 pod_workers.go:191] Error syncing pod e523bdd8-d720-4b25-957f-a4eb6c91e844 ("dashboard-metrics-scraper-8d5bb5db8-mf7fs_kubernetes-dashboard(e523bdd8-d720-4b25-957f-a4eb6c91e844)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0422 18:07:40.989420  365956 logs.go:138] Found kubelet problem: Apr 22 18:06:30 old-k8s-version-986384 kubelet[1196]: E0422 18:06:30.618689    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0422 18:07:40.989615  365956 logs.go:138] Found kubelet problem: Apr 22 18:06:39 old-k8s-version-986384 kubelet[1196]: E0422 18:06:39.618588    1196 pod_workers.go:191] Error syncing pod e523bdd8-d720-4b25-957f-a4eb6c91e844 ("dashboard-metrics-scraper-8d5bb5db8-mf7fs_kubernetes-dashboard(e523bdd8-d720-4b25-957f-a4eb6c91e844)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0422 18:07:40.989797  365956 logs.go:138] Found kubelet problem: Apr 22 18:06:43 old-k8s-version-986384 kubelet[1196]: E0422 18:06:43.618416    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0422 18:07:40.989992  365956 logs.go:138] Found kubelet problem: Apr 22 18:06:54 old-k8s-version-986384 kubelet[1196]: E0422 18:06:54.627082    1196 pod_workers.go:191] Error syncing pod e523bdd8-d720-4b25-957f-a4eb6c91e844 ("dashboard-metrics-scraper-8d5bb5db8-mf7fs_kubernetes-dashboard(e523bdd8-d720-4b25-957f-a4eb6c91e844)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0422 18:07:40.990175  365956 logs.go:138] Found kubelet problem: Apr 22 18:06:55 old-k8s-version-986384 kubelet[1196]: E0422 18:06:55.618270    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0422 18:07:40.990360  365956 logs.go:138] Found kubelet problem: Apr 22 18:07:06 old-k8s-version-986384 kubelet[1196]: E0422 18:07:06.619950    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0422 18:07:40.990554  365956 logs.go:138] Found kubelet problem: Apr 22 18:07:07 old-k8s-version-986384 kubelet[1196]: E0422 18:07:07.618622    1196 pod_workers.go:191] Error syncing pod e523bdd8-d720-4b25-957f-a4eb6c91e844 ("dashboard-metrics-scraper-8d5bb5db8-mf7fs_kubernetes-dashboard(e523bdd8-d720-4b25-957f-a4eb6c91e844)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0422 18:07:40.990740  365956 logs.go:138] Found kubelet problem: Apr 22 18:07:18 old-k8s-version-986384 kubelet[1196]: E0422 18:07:18.622590    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0422 18:07:40.990936  365956 logs.go:138] Found kubelet problem: Apr 22 18:07:18 old-k8s-version-986384 kubelet[1196]: E0422 18:07:18.645475    1196 pod_workers.go:191] Error syncing pod e523bdd8-d720-4b25-957f-a4eb6c91e844 ("dashboard-metrics-scraper-8d5bb5db8-mf7fs_kubernetes-dashboard(e523bdd8-d720-4b25-957f-a4eb6c91e844)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0422 18:07:40.991121  365956 logs.go:138] Found kubelet problem: Apr 22 18:07:29 old-k8s-version-986384 kubelet[1196]: E0422 18:07:29.619046    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0422 18:07:40.991317  365956 logs.go:138] Found kubelet problem: Apr 22 18:07:33 old-k8s-version-986384 kubelet[1196]: E0422 18:07:33.618306    1196 pod_workers.go:191] Error syncing pod e523bdd8-d720-4b25-957f-a4eb6c91e844 ("dashboard-metrics-scraper-8d5bb5db8-mf7fs_kubernetes-dashboard(e523bdd8-d720-4b25-957f-a4eb6c91e844)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	I0422 18:07:40.991327  365956 logs.go:123] Gathering logs for etcd [f029b0e7c02d] ...
	I0422 18:07:40.991341  365956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f029b0e7c02d"
	I0422 18:07:41.018437  365956 logs.go:123] Gathering logs for kube-scheduler [8d4da1dcae53] ...
	I0422 18:07:41.018468  365956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d4da1dcae53"
	I0422 18:07:41.057870  365956 logs.go:123] Gathering logs for kube-scheduler [4ef6e29c4a78] ...
	I0422 18:07:41.057901  365956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4ef6e29c4a78"
	I0422 18:07:41.085317  365956 logs.go:123] Gathering logs for kube-controller-manager [980fcb0f5f0b] ...
	I0422 18:07:41.085388  365956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 980fcb0f5f0b"
	I0422 18:07:41.132811  365956 logs.go:123] Gathering logs for storage-provisioner [2e587f1e7363] ...
	I0422 18:07:41.132846  365956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2e587f1e7363"
	I0422 18:07:41.168635  365956 logs.go:123] Gathering logs for dmesg ...
	I0422 18:07:41.168664  365956 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0422 18:07:41.188491  365956 logs.go:123] Gathering logs for etcd [5899397ea4a9] ...
	I0422 18:07:41.188519  365956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5899397ea4a9"
	I0422 18:07:41.211325  365956 logs.go:123] Gathering logs for coredns [27648c4ab762] ...
	I0422 18:07:41.211352  365956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 27648c4ab762"
	I0422 18:07:41.232027  365956 logs.go:123] Gathering logs for kube-controller-manager [d0fea9e4ec1f] ...
	I0422 18:07:41.232062  365956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d0fea9e4ec1f"
	I0422 18:07:41.293576  365956 logs.go:123] Gathering logs for container status ...
	I0422 18:07:41.293611  365956 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0422 18:07:41.343832  365956 logs.go:123] Gathering logs for kubernetes-dashboard [38455e350073] ...
	I0422 18:07:41.343859  365956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 38455e350073"
	I0422 18:07:41.368134  365956 logs.go:123] Gathering logs for Docker ...
	I0422 18:07:41.368170  365956 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0422 18:07:41.402656  365956 logs.go:123] Gathering logs for describe nodes ...
	I0422 18:07:41.402691  365956 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0422 18:07:41.574728  365956 logs.go:123] Gathering logs for kube-apiserver [cfd818ddce4e] ...
	I0422 18:07:41.574762  365956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cfd818ddce4e"
	I0422 18:07:41.620507  365956 logs.go:123] Gathering logs for kube-apiserver [36734b404817] ...
	I0422 18:07:41.620545  365956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36734b404817"
	I0422 18:07:41.711172  365956 logs.go:123] Gathering logs for coredns [f9d8bddf7197] ...
	I0422 18:07:41.711209  365956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9d8bddf7197"
	I0422 18:07:41.735067  365956 logs.go:123] Gathering logs for kube-proxy [99181fe4786b] ...
	I0422 18:07:41.735100  365956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99181fe4786b"
	I0422 18:07:41.757564  365956 logs.go:123] Gathering logs for kube-proxy [2670916c7c26] ...
	I0422 18:07:41.757590  365956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2670916c7c26"
	I0422 18:07:41.780898  365956 logs.go:123] Gathering logs for storage-provisioner [9dca99e0f1e4] ...
	I0422 18:07:41.780926  365956 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9dca99e0f1e4"
	I0422 18:07:41.803638  365956 out.go:304] Setting ErrFile to fd 2...
	I0422 18:07:41.803662  365956 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0422 18:07:41.803703  365956 out.go:239] X Problems detected in kubelet:
	W0422 18:07:41.803717  365956 out.go:239]   Apr 22 18:07:07 old-k8s-version-986384 kubelet[1196]: E0422 18:07:07.618622    1196 pod_workers.go:191] Error syncing pod e523bdd8-d720-4b25-957f-a4eb6c91e844 ("dashboard-metrics-scraper-8d5bb5db8-mf7fs_kubernetes-dashboard(e523bdd8-d720-4b25-957f-a4eb6c91e844)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0422 18:07:41.803726  365956 out.go:239]   Apr 22 18:07:18 old-k8s-version-986384 kubelet[1196]: E0422 18:07:18.622590    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0422 18:07:41.803738  365956 out.go:239]   Apr 22 18:07:18 old-k8s-version-986384 kubelet[1196]: E0422 18:07:18.645475    1196 pod_workers.go:191] Error syncing pod e523bdd8-d720-4b25-957f-a4eb6c91e844 ("dashboard-metrics-scraper-8d5bb5db8-mf7fs_kubernetes-dashboard(e523bdd8-d720-4b25-957f-a4eb6c91e844)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0422 18:07:41.803745  365956 out.go:239]   Apr 22 18:07:29 old-k8s-version-986384 kubelet[1196]: E0422 18:07:29.619046    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0422 18:07:41.803752  365956 out.go:239]   Apr 22 18:07:33 old-k8s-version-986384 kubelet[1196]: E0422 18:07:33.618306    1196 pod_workers.go:191] Error syncing pod e523bdd8-d720-4b25-957f-a4eb6c91e844 ("dashboard-metrics-scraper-8d5bb5db8-mf7fs_kubernetes-dashboard(e523bdd8-d720-4b25-957f-a4eb6c91e844)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	I0422 18:07:41.803765  365956 out.go:304] Setting ErrFile to fd 2...
	I0422 18:07:41.803770  365956 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 18:07:44.088928  378993 pod_ready.go:102] pod "metrics-server-569cc877fc-xsb7m" in "kube-system" namespace has status "Ready":"False"
	I0422 18:07:46.093003  378993 pod_ready.go:102] pod "metrics-server-569cc877fc-xsb7m" in "kube-system" namespace has status "Ready":"False"
	I0422 18:07:48.587921  378993 pod_ready.go:102] pod "metrics-server-569cc877fc-xsb7m" in "kube-system" namespace has status "Ready":"False"
	I0422 18:07:51.804813  365956 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0422 18:07:51.817041  365956 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
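Annotation: the 200 from /healthz above is the probe minikube performs once the apiserver process appears (address and port taken from the log two lines up). A rough equivalent from any host that can reach the node, assuming the default anonymous access to /healthz is still enabled:

	# -k skips verification of the apiserver's self-signed certificate;
	# /healthz is readable anonymously on default clusters.
	curl -k https://192.168.76.2:8443/healthz
	# Expected response body on success: ok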
	I0422 18:07:51.819583  365956 out.go:177] 
	W0422 18:07:51.821795  365956 out.go:239] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W0422 18:07:51.821832  365956 out.go:239] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	W0422 18:07:51.821850  365956 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	W0422 18:07:51.821855  365956 out.go:239] * 
	W0422 18:07:51.822809  365956 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0422 18:07:51.826094  365956 out.go:177] 
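Annotation: despite the healthy /healthz response, the start fails because the control plane never reported the requested v1.20.0 version within the 6m0s window. The recovery minikube itself suggests above can be sketched as follows (profile name and Kubernetes version are taken from this log; any further start flags would be assumptions):

	# Wipe all profiles and cached state, then recreate the cluster.
	minikube delete --all --purge
	minikube start -p old-k8s-version-986384 --kubernetes-version=v1.20.0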
	
	
	==> Docker <==
	Apr 22 18:07:29 old-k8s-version-986384 dockerd[966]: 2024/04/22 18:07:29 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)  [identical message repeated 5 times]
	Apr 22 18:07:30 old-k8s-version-986384 dockerd[966]: 2024/04/22 18:07:30 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)  [identical message repeated 5 times]
	Apr 22 18:07:41 old-k8s-version-986384 dockerd[966]: 2024/04/22 18:07:41 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)  [identical message repeated 15 times]
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	2e587f1e73637       ba04bb24b9575                                                                                         5 minutes ago       Running             storage-provisioner       2                   4a61d20b8f47f       storage-provisioner
	38455e3500735       kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93        5 minutes ago       Running             kubernetes-dashboard      0                   f017189416d4a       kubernetes-dashboard-cd95d586-sl4t5
	99181fe4786bb       25a5233254979                                                                                         5 minutes ago       Running             kube-proxy                1                   cef1139b31c12       kube-proxy-cpcrp
	7282eebd499e5       1611cd07b61d5                                                                                         5 minutes ago       Running             busybox                   1                   f98c8fb92d737       busybox
	27648c4ab762d       db91994f4ee8f                                                                                         5 minutes ago       Running             coredns                   1                   506a05b99bb39       coredns-74ff55c5b-6dgwt
	9dca99e0f1e4b       ba04bb24b9575                                                                                         5 minutes ago       Exited              storage-provisioner       1                   4a61d20b8f47f       storage-provisioner
	5899397ea4a9c       05b738aa1bc63                                                                                         6 minutes ago       Running             etcd                      1                   32e7599a43927       etcd-old-k8s-version-986384
	8d4da1dcae530       e7605f88f17d6                                                                                         6 minutes ago       Running             kube-scheduler            1                   b200495b71d09       kube-scheduler-old-k8s-version-986384
	980fcb0f5f0ba       1df8a2b116bd1                                                                                         6 minutes ago       Running             kube-controller-manager   1                   8ba5024eaae36       kube-controller-manager-old-k8s-version-986384
	cfd818ddce4e8       2c08bbbc02d3a                                                                                         6 minutes ago       Running             kube-apiserver            1                   f4e927da32198       kube-apiserver-old-k8s-version-986384
	4776515699c91       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   6 minutes ago       Exited              busybox                   0                   33d9be7ba4c66       busybox
	f9d8bddf71973       db91994f4ee8f                                                                                         8 minutes ago       Exited              coredns                   0                   ebda59e8275f6       coredns-74ff55c5b-6dgwt
	2670916c7c265       25a5233254979                                                                                         8 minutes ago       Exited              kube-proxy                0                   2d83c57178e8d       kube-proxy-cpcrp
	36734b4048172       2c08bbbc02d3a                                                                                         8 minutes ago       Exited              kube-apiserver            0                   da8265715a13c       kube-apiserver-old-k8s-version-986384
	f029b0e7c02d4       05b738aa1bc63                                                                                         8 minutes ago       Exited              etcd                      0                   ce13eb091d198       etcd-old-k8s-version-986384
	4ef6e29c4a78d       e7605f88f17d6                                                                                         8 minutes ago       Exited              kube-scheduler            0                   e7a9572189dcb       kube-scheduler-old-k8s-version-986384
	d0fea9e4ec1fa       1df8a2b116bd1                                                                                         8 minutes ago       Exited              kube-controller-manager   0                   2547de5a84b53       kube-controller-manager-old-k8s-version-986384
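Annotation: each control-plane component appears twice in the table above: an Exited container with ATTEMPT 0 from the first boot, and a Running container with ATTEMPT 1 from the SecondStart restart under test (ATTEMPT 2 for storage-provisioner, which also restarted once after a crash; kubernetes-dashboard was only deployed after the restart, so it has a single entry). The table comes from the container-status gathering command shown earlier in this log:

	# Prefer crictl when present, otherwise fall back to the Docker CLI.
	sudo `which crictl || echo crictl` ps -a || sudo docker ps -a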
	
	
	==> coredns [27648c4ab762] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:41020 - 41349 "HINFO IN 3701406051455781997.1799737944259204741. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.03139093s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	I0422 18:02:32.558855       1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-04-22 18:02:02.558104642 +0000 UTC m=+0.026675373) (total time: 30.000645606s):
	Trace[2019727887]: [30.000645606s] [30.000645606s] END
	E0422 18:02:32.558890       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0422 18:02:32.560374       1 trace.go:116] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-04-22 18:02:02.559499996 +0000 UTC m=+0.028070777) (total time: 30.000849046s):
	Trace[939984059]: [30.000849046s] [30.000849046s] END
	E0422 18:02:32.560391       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0422 18:02:32.560571       1 trace.go:116] Trace[1474941318]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-04-22 18:02:02.560312411 +0000 UTC m=+0.028883133) (total time: 30.00024639s):
	Trace[1474941318]: [30.00024639s] [30.00024639s] END
	E0422 18:02:32.560585       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
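	The three ListAndWatch failures above share one symptom: every request to 10.96.0.1:443, the in-cluster kubernetes Service VIP, times out, which points at pod-to-apiserver connectivity during the restart rather than at CoreDNS itself. Below is a minimal Go sketch of that reachability check, assuming it runs from a pod on this cluster; the address comes from the log lines and the 5-second timeout is an arbitrary choice, not anything CoreDNS configures.
	
	package main
	
	import (
		"fmt"
		"net"
		"time"
	)
	
	func main() {
		// Probe the kubernetes Service VIP that CoreDNS could not reach.
		conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 5*time.Second)
		if err != nil {
			// Mirrors the "dial tcp 10.96.0.1:443: i/o timeout" errors above.
			fmt.Println("apiserver VIP unreachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("apiserver VIP reachable")
	}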
	
	
	==> coredns [f9d8bddf7197] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
	[INFO] Reloading complete
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	E0422 18:01:28.262016       1 reflector.go:382] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=199&timeout=9m4s&timeoutSeconds=544&watch=true": dial tcp 10.96.0.1:443: connect: connection refused
	E0422 18:01:28.262317       1 reflector.go:382] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to watch *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?allowWatchBookmarks=true&resourceVersion=593&timeout=6m4s&timeoutSeconds=364&watch=true": dial tcp 10.96.0.1:443: connect: connection refused
	E0422 18:01:28.262478       1 reflector.go:382] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=591&timeout=6m54s&timeoutSeconds=414&watch=true": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> describe nodes <==
	Name:               old-k8s-version-986384
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-986384
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=066f6aefcc83a135104448c0f8191604ce1e099a
	                    minikube.k8s.io/name=old-k8s-version-986384
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_22T17_59_22_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Apr 2024 17:59:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-986384
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Apr 2024 18:07:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Apr 2024 18:02:50 +0000   Mon, 22 Apr 2024 17:59:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Apr 2024 18:02:50 +0000   Mon, 22 Apr 2024 17:59:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Apr 2024 18:02:50 +0000   Mon, 22 Apr 2024 17:59:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Apr 2024 18:02:50 +0000   Mon, 22 Apr 2024 17:59:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-986384
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022560Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022560Ki
	  pods:               110
	System Info:
	  Machine ID:                 4609a78f857440d5b58d5c23f259c275
	  System UUID:                309ee2d7-eff4-47c6-861b-329dfa90d712
	  Boot ID:                    10a06b61-013b-4e8e-82bb-900d7f84a0de
	  Kernel Version:             5.15.0-1058-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.20.0
	  Kube-Proxy Version:         v1.20.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m37s
	  kube-system                 coredns-74ff55c5b-6dgwt                           100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     8m17s
	  kube-system                 etcd-old-k8s-version-986384                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         8m26s
	  kube-system                 kube-apiserver-old-k8s-version-986384             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m26s
	  kube-system                 kube-controller-manager-old-k8s-version-986384    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m26s
	  kube-system                 kube-proxy-cpcrp                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m17s
	  kube-system                 kube-scheduler-old-k8s-version-986384             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m26s
	  kube-system                 metrics-server-9975d5f86-ts7pq                    100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         6m25s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m14s
	  kubernetes-dashboard        dashboard-metrics-scraper-8d5bb5db8-mf7fs         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m36s
	  kubernetes-dashboard        kubernetes-dashboard-cd95d586-sl4t5               0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m36s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (4%)  170Mi (2%)
	  ephemeral-storage  100Mi (0%)  0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  8m47s (x6 over 8m47s)  kubelet     Node old-k8s-version-986384 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m47s (x7 over 8m47s)  kubelet     Node old-k8s-version-986384 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m47s (x6 over 8m47s)  kubelet     Node old-k8s-version-986384 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m27s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m27s                  kubelet     Node old-k8s-version-986384 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m27s                  kubelet     Node old-k8s-version-986384 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m27s                  kubelet     Node old-k8s-version-986384 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             8m27s                  kubelet     Node old-k8s-version-986384 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  8m27s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                8m17s                  kubelet     Node old-k8s-version-986384 status is now: NodeReady
	  Normal  Starting                 8m14s                  kube-proxy  Starting kube-proxy.
	  Normal  Starting                 6m3s                   kubelet     Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m3s                   kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m2s (x8 over 6m3s)    kubelet     Node old-k8s-version-986384 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m2s (x8 over 6m3s)    kubelet     Node old-k8s-version-986384 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m2s (x7 over 6m3s)    kubelet     Node old-k8s-version-986384 status is now: NodeHasSufficientPID
	  Normal  Starting                 5m50s                  kube-proxy  Starting kube-proxy.
	
	
	==> dmesg <==
	[  +0.001012] FS-Cache: N-cookie d=000000005ffb80a3{9p.inode} n=000000001cc2a9cb
	[  +0.001138] FS-Cache: N-key=[8] '856ced0000000000'
	[  +0.016545] FS-Cache: Duplicate cookie detected
	[  +0.000757] FS-Cache: O-cookie c=00000006 [p=00000003 fl=226 nc=0 na=1]
	[  +0.001063] FS-Cache: O-cookie d=000000005ffb80a3{9p.inode} n=00000000f248b3ab
	[  +0.001115] FS-Cache: O-key=[8] '856ced0000000000'
	[  +0.000801] FS-Cache: N-cookie c=0000000d [p=00000003 fl=2 nc=0 na=1]
	[  +0.001025] FS-Cache: N-cookie d=000000005ffb80a3{9p.inode} n=0000000098a3ba1c
	[  +0.001125] FS-Cache: N-key=[8] '856ced0000000000'
	[  +3.054302] FS-Cache: Duplicate cookie detected
	[  +0.000736] FS-Cache: O-cookie c=00000004 [p=00000003 fl=226 nc=0 na=1]
	[  +0.001119] FS-Cache: O-cookie d=000000005ffb80a3{9p.inode} n=0000000083092e75
	[  +0.001142] FS-Cache: O-key=[8] '846ced0000000000'
	[  +0.000805] FS-Cache: N-cookie c=0000000f [p=00000003 fl=2 nc=0 na=1]
	[  +0.000995] FS-Cache: N-cookie d=000000005ffb80a3{9p.inode} n=000000008c5ba4fb
	[  +0.001203] FS-Cache: N-key=[8] '846ced0000000000'
	[  +0.358279] FS-Cache: Duplicate cookie detected
	[  +0.000759] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
	[  +0.001050] FS-Cache: O-cookie d=000000005ffb80a3{9p.inode} n=000000006e167a51
	[  +0.001140] FS-Cache: O-key=[8] '8e6ced0000000000'
	[  +0.000779] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.000976] FS-Cache: N-cookie d=000000005ffb80a3{9p.inode} n=00000000c933bfc4
	[  +0.001079] FS-Cache: N-key=[8] '8e6ced0000000000'
	[Apr22 17:45] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Apr22 17:56] hrtimer: interrupt took 38491516 ns
	
	
	==> etcd [5899397ea4a9] <==
	2024-04-22 18:03:50.533035 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-22 18:04:00.542047 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-22 18:04:10.532489 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-22 18:04:20.532522 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-22 18:04:30.532624 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-22 18:04:40.532569 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-22 18:04:50.532549 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-22 18:05:00.532602 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-22 18:05:10.532662 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-22 18:05:20.532669 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-22 18:05:30.532896 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-22 18:05:40.534211 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-22 18:05:50.532641 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-22 18:06:00.532989 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-22 18:06:10.533313 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-22 18:06:20.532527 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-22 18:06:30.532554 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-22 18:06:40.532531 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-22 18:06:50.532592 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-22 18:07:00.533129 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-22 18:07:10.532523 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-22 18:07:20.532839 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-22 18:07:30.533054 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-22 18:07:40.532822 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-22 18:07:50.532764 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> etcd [f029b0e7c02d] <==
	raft2024/04/22 17:59:07 INFO: raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2
	2024-04-22 17:59:07.961061 I | etcdserver: setting up the initial cluster version to 3.4
	2024-04-22 17:59:07.961838 N | etcdserver/membership: set the initial cluster version to 3.4
	2024-04-22 17:59:07.962009 I | etcdserver/api: enabled capabilities for version 3.4
	2024-04-22 17:59:07.962306 I | etcdserver: published {Name:old-k8s-version-986384 ClientURLs:[https://192.168.76.2:2379]} to cluster 6f20f2c4b2fb5f8a
	2024-04-22 17:59:07.962393 I | embed: ready to serve client requests
	2024-04-22 17:59:07.966956 I | embed: serving client requests on 127.0.0.1:2379
	2024-04-22 17:59:07.967246 I | embed: ready to serve client requests
	2024-04-22 17:59:07.970692 I | embed: serving client requests on 192.168.76.2:2379
	2024-04-22 17:59:18.905225 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-22 17:59:22.156511 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-22 17:59:35.691132 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-22 17:59:39.096111 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-22 17:59:49.095369 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-22 17:59:59.095425 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-22 18:00:09.095437 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-22 18:00:19.095433 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-22 18:00:29.095345 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-22 18:00:39.095423 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-22 18:00:49.095219 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-22 18:00:59.095390 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-22 18:01:09.095440 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-22 18:01:19.095259 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-04-22 18:01:28.180306 N | pkg/osutil: received terminated signal, shutting down...
	2024-04-22 18:01:28.200950 I | etcdserver: skipped leadership transfer for single voting member cluster
	
	
	==> kernel <==
	 18:07:53 up  1:50,  0 users,  load average: 3.24, 3.10, 3.51
	Linux old-k8s-version-986384 5.15.0-1058-aws #64~20.04.1-Ubuntu SMP Tue Apr 9 11:11:55 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kube-apiserver [36734b404817] <==
	W0422 18:01:37.680089       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0422 18:01:37.685199       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0422 18:01:37.693166       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0422 18:01:37.718386       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0422 18:01:37.731440       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0422 18:01:37.735623       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0422 18:01:37.748150       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0422 18:01:37.765075       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0422 18:01:37.781787       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0422 18:01:37.825804       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0422 18:01:37.828836       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0422 18:01:37.860827       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0422 18:01:37.877336       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0422 18:01:37.878380       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0422 18:01:37.887355       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0422 18:01:37.908290       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0422 18:01:37.962107       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0422 18:01:37.970613       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0422 18:01:38.017939       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0422 18:01:38.051761       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0422 18:01:38.129355       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0422 18:01:38.162104       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0422 18:01:38.229383       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0422 18:01:38.248579       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0422 18:01:38.279742       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	
	
	==> kube-apiserver [cfd818ddce4e] <==
	I0422 18:04:28.640002       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0422 18:04:28.640012       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0422 18:05:02.590883       1 handler_proxy.go:102] no RequestInfo found in the context
	E0422 18:05:02.591007       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0422 18:05:02.591021       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0422 18:05:13.237840       1 client.go:360] parsed scheme: "passthrough"
	I0422 18:05:13.237884       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0422 18:05:13.237893       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0422 18:05:50.773633       1 client.go:360] parsed scheme: "passthrough"
	I0422 18:05:50.773681       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0422 18:05:50.773690       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0422 18:06:23.426710       1 client.go:360] parsed scheme: "passthrough"
	I0422 18:06:23.426757       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0422 18:06:23.426766       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0422 18:07:00.881672       1 handler_proxy.go:102] no RequestInfo found in the context
	E0422 18:07:00.881751       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0422 18:07:00.881760       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0422 18:07:02.658548       1 client.go:360] parsed scheme: "passthrough"
	I0422 18:07:02.658606       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0422 18:07:02.658615       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0422 18:07:45.528366       1 client.go:360] parsed scheme: "passthrough"
	I0422 18:07:45.528415       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0422 18:07:45.528425       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	
	==> kube-controller-manager [980fcb0f5f0b] <==
	W0422 18:03:23.020021       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0422 18:03:49.546387       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0422 18:03:54.670494       1 request.go:655] Throttling request took 1.043958283s, request: GET:https://192.168.76.2:8443/apis/authorization.k8s.io/v1?timeout=32s
	W0422 18:03:55.522173       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0422 18:04:20.049151       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0422 18:04:27.172708       1 request.go:655] Throttling request took 1.048058546s, request: GET:https://192.168.76.2:8443/apis/storage.k8s.io/v1beta1?timeout=32s
	W0422 18:04:28.025268       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0422 18:04:50.550909       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0422 18:04:59.675810       1 request.go:655] Throttling request took 1.048558518s, request: GET:https://192.168.76.2:8443/apis/authorization.k8s.io/v1?timeout=32s
	W0422 18:05:00.527466       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0422 18:05:21.052877       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0422 18:05:32.178046       1 request.go:655] Throttling request took 1.048255986s, request: GET:https://192.168.76.2:8443/apis/authorization.k8s.io/v1beta1?timeout=32s
	W0422 18:05:33.031295       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0422 18:05:51.554847       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0422 18:06:04.686975       1 request.go:655] Throttling request took 1.048155545s, request: GET:https://192.168.76.2:8443/apis/apiextensions.k8s.io/v1?timeout=32s
	W0422 18:06:05.538712       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0422 18:06:22.056707       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0422 18:06:37.189047       1 request.go:655] Throttling request took 1.048339994s, request: GET:https://192.168.76.2:8443/apis/policy/v1beta1?timeout=32s
	W0422 18:06:38.042215       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0422 18:06:52.559506       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0422 18:07:09.693014       1 request.go:655] Throttling request took 1.048161991s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W0422 18:07:10.544622       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0422 18:07:23.061360       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0422 18:07:42.195266       1 request.go:655] Throttling request took 1.048366033s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W0422 18:07:43.049658       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
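	The repeating pair above is a feedback loop: metrics-server never comes up (see the kubelet log below), so discovery of the aggregated metrics.k8s.io/v1beta1 API keeps failing, and each retry round is delayed by the client's own rate limiter, hence the "Throttling request took ~1.04s" lines. A minimal sketch of that client-side throttling behavior using golang.org/x/time/rate follows; the 10 qps / burst 20 numbers are illustrative assumptions, not the controller-manager's actual configuration.
	
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		"golang.org/x/time/rate"
	)
	
	func main() {
		// A client-side limiter similar in spirit to client-go's; the
		// numbers are assumptions chosen to make the throttling visible.
		lim := rate.NewLimiter(rate.Limit(10), 20)
		ctx := context.Background()
		for i := 0; i < 40; i++ {
			start := time.Now()
			_ = lim.Wait(ctx) // blocks once the initial burst is spent
			if d := time.Since(start); d > 50*time.Millisecond {
				fmt.Printf("request %d throttled for %s\n", i, d.Round(time.Millisecond))
			}
		}
	}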
	
	
	==> kube-controller-manager [d0fea9e4ec1f] <==
	I0422 17:59:36.682482       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-lbrj7"
	I0422 17:59:36.682630       1 shared_informer.go:247] Caches are synced for service account 
	I0422 17:59:36.715766       1 shared_informer.go:247] Caches are synced for resource quota 
	I0422 17:59:36.721635       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-6dgwt"
	I0422 17:59:36.732859       1 shared_informer.go:247] Caches are synced for namespace 
	I0422 17:59:36.741411       1 event.go:291] "Event occurred" object="kube-system/kube-apiserver-old-k8s-version-986384" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0422 17:59:36.770746       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I0422 17:59:37.073097       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0422 17:59:37.107605       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0422 17:59:37.107627       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0422 17:59:39.624186       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
	I0422 17:59:39.645162       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-lbrj7"
	I0422 17:59:41.775250       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b-6dgwt" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/coredns-74ff55c5b-6dgwt"
	I0422 17:59:41.775345       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b-lbrj7" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/coredns-74ff55c5b-lbrj7"
	I0422 17:59:41.775364       1 event.go:291] "Event occurred" object="kube-system/storage-provisioner" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/storage-provisioner"
	I0422 17:59:41.775706       1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I0422 18:01:27.087676       1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-9975d5f86 to 1"
	I0422 18:01:28.205314       1 event.go:291] "Event occurred" object="kube-system/metrics-server-9975d5f86" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-9975d5f86-ts7pq"
	E0422 18:01:28.265401       1 replica_set.go:532] sync "kube-system/metrics-server-9975d5f86" failed with Get "https://192.168.76.2:8443/apis/apps/v1/namespaces/kube-system/replicasets/metrics-server-9975d5f86": dial tcp 192.168.76.2:8443: connect: connection refused
	E0422 18:01:28.265914       1 replica_set.go:532] sync "kube-system/metrics-server-9975d5f86" failed with Get "https://192.168.76.2:8443/apis/apps/v1/namespaces/kube-system/replicasets/metrics-server-9975d5f86": dial tcp 192.168.76.2:8443: connect: connection refused
	E0422 18:01:28.266364       1 event.go:273] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"metrics-server-9975d5f86.17c8ac1a8a56b2ad", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"metrics-server-9975d5f86", UID:"131d4dfc-3b80-4f0d-b3a5-754fe7eef31e", APIVersion:"apps/v1", ResourceVersion:"574", FieldPath:""}, Reason:"SuccessfulCreate", Message:"Created pod: metrics-server-9975d5f86-ts7pq", Source:v1.EventSource{Component:"replicaset-controller", Host:""}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc181c75e0c3102ad, ext:140830105453, loc:(*time.Location)(0x632eb80)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc181c75e0c3102ad, ext:140830105453, loc:(*time.Location)(0x632eb80)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://192.168.76.2:8443/api/v1/namespaces/kube-system/events": unexpected EOF'(may retry after sleeping)
	W0422 18:01:28.266568       1 endpointslice_controller.go:284] Error syncing endpoint slices for service "kube-system/metrics-server", retrying. Error: failed to update metrics-server-z7bb5 EndpointSlice for Service kube-system/metrics-server: Put "https://192.168.76.2:8443/apis/discovery.k8s.io/v1beta1/namespaces/kube-system/endpointslices/metrics-server-z7bb5": unexpected EOF
	I0422 18:01:28.266774       1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Service" apiVersion="v1" type="Warning" reason="FailedToUpdateEndpointSlices" message="Error updating Endpoint Slices for Service kube-system/metrics-server: failed to update metrics-server-z7bb5 EndpointSlice for Service kube-system/metrics-server: Put \"https://192.168.76.2:8443/apis/discovery.k8s.io/v1beta1/namespaces/kube-system/endpointslices/metrics-server-z7bb5\": unexpected EOF"
	E0422 18:01:28.266815       1 event.go:273] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"metrics-server.17c8ac1a8e0911c3", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"metrics-server", UID:"01cdb85f-6d05-49a0-80ed-9804962c6e38", APIVersion:"v1", ResourceVersion:"591", FieldPath:""}, Reason:"FailedToUpdateEndpointSlices", Message:"Error updating Endpoint Slices for Service kube-system/metrics-server: failed to update metrics-server-z7bb5 EndpointSlice for Service kube-system/metrics-server: Put \"https://192.168.76.2:8443/apis/discovery.k8s.io/v1beta1/namespaces/kube-system/endpointslices/metrics-server-z7bb5\": unexpected EOF", Source:v1.EventSource{Component:"endpoint-slice-controller", Host:""}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc181c75e0fe361c3, ext:140892126860, loc:(*time.Location)(0x632eb80)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc181c75e0fe361c3, ext:140892126860, loc:(*time.Location)(0x632eb80)}}, Count:1, Type:"Warning", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://192.168.76.2:8443/api/v1/namespaces/kube-system/events": dial tcp 192.168.76.2:8443: connect: connection refused'(may retry after sleeping)
	E0422 18:01:28.271274       1 replica_set.go:532] sync "kube-system/metrics-server-9975d5f86" failed with Get "https://192.168.76.2:8443/apis/apps/v1/namespaces/kube-system/replicasets/metrics-server-9975d5f86": dial tcp 192.168.76.2:8443: connect: connection refused
	
	
	==> kube-proxy [2670916c7c26] <==
	I0422 17:59:38.862752       1 node.go:172] Successfully retrieved node IP: 192.168.76.2
	I0422 17:59:38.865038       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
	W0422 17:59:39.114815       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0422 17:59:39.114910       1 server_others.go:185] Using iptables Proxier.
	I0422 17:59:39.115145       1 server.go:650] Version: v1.20.0
	I0422 17:59:39.116118       1 config.go:315] Starting service config controller
	I0422 17:59:39.116128       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0422 17:59:39.116163       1 config.go:224] Starting endpoint slice config controller
	I0422 17:59:39.116183       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0422 17:59:39.216255       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0422 17:59:39.216339       1 shared_informer.go:247] Caches are synced for service config 
	
	
	==> kube-proxy [99181fe4786b] <==
	I0422 18:02:03.962985       1 node.go:172] Successfully retrieved node IP: 192.168.76.2
	I0422 18:02:03.963077       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
	W0422 18:02:03.982859       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0422 18:02:03.983130       1 server_others.go:185] Using iptables Proxier.
	I0422 18:02:03.983580       1 server.go:650] Version: v1.20.0
	I0422 18:02:03.984906       1 config.go:315] Starting service config controller
	I0422 18:02:03.985405       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0422 18:02:03.985442       1 config.go:224] Starting endpoint slice config controller
	I0422 18:02:03.985448       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0422 18:02:04.085536       1 shared_informer.go:247] Caches are synced for service config 
	I0422 18:02:04.086947       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	
	==> kube-scheduler [4ef6e29c4a78] <==
	I0422 17:59:18.026445       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0422 17:59:18.027039       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0422 17:59:18.027062       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0422 17:59:18.052890       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0422 17:59:18.053598       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0422 17:59:18.053867       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0422 17:59:18.057966       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0422 17:59:18.061253       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0422 17:59:18.073032       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0422 17:59:18.073145       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0422 17:59:18.073243       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0422 17:59:18.073501       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0422 17:59:18.073568       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0422 17:59:18.073653       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0422 17:59:18.073736       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0422 17:59:18.078193       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0422 17:59:18.928197       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0422 17:59:18.954183       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0422 17:59:18.955689       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0422 17:59:18.979673       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0422 17:59:18.979988       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0422 17:59:19.077173       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0422 17:59:19.183261       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0422 17:59:19.207727       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0422 17:59:21.727129       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kube-scheduler [8d4da1dcae53] <==
	I0422 18:01:55.563396       1 serving.go:331] Generated self-signed cert in-memory
	W0422 18:01:59.666880       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0422 18:01:59.666915       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0422 18:01:59.666925       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0422 18:01:59.666932       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0422 18:01:59.915777       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0422 18:01:59.930167       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0422 18:01:59.930195       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0422 18:01:59.930635       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0422 18:02:00.021408       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0422 18:02:00.021523       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0422 18:02:00.021620       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0422 18:02:00.021685       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0422 18:02:00.021749       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0422 18:02:00.021812       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0422 18:02:00.021878       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0422 18:02:00.021940       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0422 18:02:00.022002       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0422 18:02:00.022064       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0422 18:02:00.022124       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0422 18:02:00.033146       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0422 18:02:01.033559       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kubelet <==
	Apr 22 18:05:23 old-k8s-version-986384 kubelet[1196]: E0422 18:05:23.061117    1196 kuberuntime_manager.go:829] container &Container{Name:dashboard-metrics-scraper,Image:registry.k8s.io/echoserver:1.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:8000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-volume,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kubernetes-dashboard-token-chrvr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 8000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:30,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:*2001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod dashboard-metrics-scraper-8d5bb5db8-mf7fs_kubernetes-dashboard(e523bdd8-d720-4b25-957f-a4eb6c91e844): ErrImagePull: rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/
	Apr 22 18:05:23 old-k8s-version-986384 kubelet[1196]: E0422 18:05:23.061150    1196 pod_workers.go:191] Error syncing pod e523bdd8-d720-4b25-957f-a4eb6c91e844 ("dashboard-metrics-scraper-8d5bb5db8-mf7fs_kubernetes-dashboard(e523bdd8-d720-4b25-957f-a4eb6c91e844)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	Apr 22 18:05:28 old-k8s-version-986384 kubelet[1196]: E0422 18:05:28.618921    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Apr 22 18:05:34 old-k8s-version-986384 kubelet[1196]: E0422 18:05:34.621754    1196 pod_workers.go:191] Error syncing pod e523bdd8-d720-4b25-957f-a4eb6c91e844 ("dashboard-metrics-scraper-8d5bb5db8-mf7fs_kubernetes-dashboard(e523bdd8-d720-4b25-957f-a4eb6c91e844)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	Apr 22 18:05:39 old-k8s-version-986384 kubelet[1196]: E0422 18:05:39.618114    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Apr 22 18:05:45 old-k8s-version-986384 kubelet[1196]: E0422 18:05:45.663272    1196 pod_workers.go:191] Error syncing pod e523bdd8-d720-4b25-957f-a4eb6c91e844 ("dashboard-metrics-scraper-8d5bb5db8-mf7fs_kubernetes-dashboard(e523bdd8-d720-4b25-957f-a4eb6c91e844)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	Apr 22 18:05:51 old-k8s-version-986384 kubelet[1196]: E0422 18:05:51.618807    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Apr 22 18:05:58 old-k8s-version-986384 kubelet[1196]: E0422 18:05:58.648913    1196 pod_workers.go:191] Error syncing pod e523bdd8-d720-4b25-957f-a4eb6c91e844 ("dashboard-metrics-scraper-8d5bb5db8-mf7fs_kubernetes-dashboard(e523bdd8-d720-4b25-957f-a4eb6c91e844)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	Apr 22 18:06:04 old-k8s-version-986384 kubelet[1196]: E0422 18:06:04.618373    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Apr 22 18:06:11 old-k8s-version-986384 kubelet[1196]: E0422 18:06:11.618095    1196 pod_workers.go:191] Error syncing pod e523bdd8-d720-4b25-957f-a4eb6c91e844 ("dashboard-metrics-scraper-8d5bb5db8-mf7fs_kubernetes-dashboard(e523bdd8-d720-4b25-957f-a4eb6c91e844)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	Apr 22 18:06:15 old-k8s-version-986384 kubelet[1196]: E0422 18:06:15.618294    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Apr 22 18:06:25 old-k8s-version-986384 kubelet[1196]: E0422 18:06:25.619230    1196 pod_workers.go:191] Error syncing pod e523bdd8-d720-4b25-957f-a4eb6c91e844 ("dashboard-metrics-scraper-8d5bb5db8-mf7fs_kubernetes-dashboard(e523bdd8-d720-4b25-957f-a4eb6c91e844)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	Apr 22 18:06:30 old-k8s-version-986384 kubelet[1196]: E0422 18:06:30.618689    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Apr 22 18:06:39 old-k8s-version-986384 kubelet[1196]: E0422 18:06:39.618588    1196 pod_workers.go:191] Error syncing pod e523bdd8-d720-4b25-957f-a4eb6c91e844 ("dashboard-metrics-scraper-8d5bb5db8-mf7fs_kubernetes-dashboard(e523bdd8-d720-4b25-957f-a4eb6c91e844)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	Apr 22 18:06:43 old-k8s-version-986384 kubelet[1196]: E0422 18:06:43.618416    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Apr 22 18:06:54 old-k8s-version-986384 kubelet[1196]: E0422 18:06:54.627082    1196 pod_workers.go:191] Error syncing pod e523bdd8-d720-4b25-957f-a4eb6c91e844 ("dashboard-metrics-scraper-8d5bb5db8-mf7fs_kubernetes-dashboard(e523bdd8-d720-4b25-957f-a4eb6c91e844)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	Apr 22 18:06:55 old-k8s-version-986384 kubelet[1196]: E0422 18:06:55.618270    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Apr 22 18:07:06 old-k8s-version-986384 kubelet[1196]: E0422 18:07:06.619950    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Apr 22 18:07:07 old-k8s-version-986384 kubelet[1196]: E0422 18:07:07.618622    1196 pod_workers.go:191] Error syncing pod e523bdd8-d720-4b25-957f-a4eb6c91e844 ("dashboard-metrics-scraper-8d5bb5db8-mf7fs_kubernetes-dashboard(e523bdd8-d720-4b25-957f-a4eb6c91e844)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	Apr 22 18:07:18 old-k8s-version-986384 kubelet[1196]: E0422 18:07:18.622590    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Apr 22 18:07:18 old-k8s-version-986384 kubelet[1196]: E0422 18:07:18.645475    1196 pod_workers.go:191] Error syncing pod e523bdd8-d720-4b25-957f-a4eb6c91e844 ("dashboard-metrics-scraper-8d5bb5db8-mf7fs_kubernetes-dashboard(e523bdd8-d720-4b25-957f-a4eb6c91e844)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	Apr 22 18:07:29 old-k8s-version-986384 kubelet[1196]: E0422 18:07:29.619046    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Apr 22 18:07:33 old-k8s-version-986384 kubelet[1196]: E0422 18:07:33.618306    1196 pod_workers.go:191] Error syncing pod e523bdd8-d720-4b25-957f-a4eb6c91e844 ("dashboard-metrics-scraper-8d5bb5db8-mf7fs_kubernetes-dashboard(e523bdd8-d720-4b25-957f-a4eb6c91e844)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	Apr 22 18:07:44 old-k8s-version-986384 kubelet[1196]: E0422 18:07:44.618324    1196 pod_workers.go:191] Error syncing pod 4faf1d5a-cd2f-4a05-b4f4-a985559dd072 ("metrics-server-9975d5f86-ts7pq_kube-system(4faf1d5a-cd2f-4a05-b4f4-a985559dd072)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Apr 22 18:07:46 old-k8s-version-986384 kubelet[1196]: E0422 18:07:46.618691    1196 pod_workers.go:191] Error syncing pod e523bdd8-d720-4b25-957f-a4eb6c91e844 ("dashboard-metrics-scraper-8d5bb5db8-mf7fs_kubernetes-dashboard(e523bdd8-d720-4b25-957f-a4eb6c91e844)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	
	
	==> kubernetes-dashboard [38455e350073] <==
	2024/04/22 18:02:24 Using namespace: kubernetes-dashboard
	2024/04/22 18:02:24 Using in-cluster config to connect to apiserver
	2024/04/22 18:02:24 Using secret token for csrf signing
	2024/04/22 18:02:24 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/04/22 18:02:24 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/04/22 18:02:24 Successful initial request to the apiserver, version: v1.20.0
	2024/04/22 18:02:24 Generating JWE encryption key
	2024/04/22 18:02:24 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/04/22 18:02:24 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/04/22 18:02:24 Initializing JWE encryption key from synchronized object
	2024/04/22 18:02:24 Creating in-cluster Sidecar client
	2024/04/22 18:02:24 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/04/22 18:02:24 Serving insecurely on HTTP port: 9090
	2024/04/22 18:02:54 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/04/22 18:03:24 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/04/22 18:03:54 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/04/22 18:04:24 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/04/22 18:04:54 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/04/22 18:05:24 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/04/22 18:05:54 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/04/22 18:06:24 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/04/22 18:06:54 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/04/22 18:07:24 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/04/22 18:02:24 Starting overwatch
	
	
	==> storage-provisioner [2e587f1e7363] <==
	I0422 18:02:44.764888       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0422 18:02:44.797121       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0422 18:02:44.797579       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0422 18:03:02.261277       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0422 18:03:02.261737       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-986384_b443db82-0015-4a43-8f89-191541cb9af0!
	I0422 18:03:02.261506       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"107cbe39-141d-47fa-8a62-83794bdd28e6", APIVersion:"v1", ResourceVersion:"822", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-986384_b443db82-0015-4a43-8f89-191541cb9af0 became leader
	I0422 18:03:02.362337       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-986384_b443db82-0015-4a43-8f89-191541cb9af0!
	
	
	==> storage-provisioner [9dca99e0f1e4] <==
	I0422 18:02:02.110504       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0422 18:02:32.118934       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-986384 -n old-k8s-version-986384
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-986384 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-9975d5f86-ts7pq dashboard-metrics-scraper-8d5bb5db8-mf7fs
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-986384 describe pod metrics-server-9975d5f86-ts7pq dashboard-metrics-scraper-8d5bb5db8-mf7fs
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-986384 describe pod metrics-server-9975d5f86-ts7pq dashboard-metrics-scraper-8d5bb5db8-mf7fs: exit status 1 (82.380471ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-9975d5f86-ts7pq" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-8d5bb5db8-mf7fs" not found

** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-986384 describe pod metrics-server-9975d5f86-ts7pq dashboard-metrics-scraper-8d5bb5db8-mf7fs: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (375.34s)
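
Failure analysis: both non-running pods reduce to image pulls. metrics-server points at fake.domain/registry.k8s.io/echoserver:1.4, a deliberately unreachable registry, so its ImagePullBackOff is expected scaffolding for the metrics-server checks. dashboard-metrics-scraper is the real casualty: registry.k8s.io/echoserver:1.4 is published as a Docker Image manifest v2, schema 1 image, which the Docker Engine 26.0.2 used in this run refuses to pull by default, per the DEPRECATION NOTICE in the kubelet log above. A minimal sketch of confirming the manifest schema from the runner (skopeo is an assumption here; it is not part of this job):

	# A schema 1 manifest reports "schemaVersion": 1; schema 2 and OCI images report 2.
	skopeo inspect --raw docker://registry.k8s.io/echoserver:1.4 | grep -m1 schemaVersion

The deprecation page linked in the kubelet error describes a temporary dockerd escape hatch (DOCKER_ENABLE_DEPRECATED_PULL_SCHEMA_1_IMAGE=1); the durable fix is pointing the test at a schema 2 / OCI build of the image.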


Test pass (316/342)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 12.3
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.3
9 TestDownloadOnly/v1.20.0/DeleteAll 0.3
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.21
12 TestDownloadOnly/v1.30.0/json-events 8.8
13 TestDownloadOnly/v1.30.0/preload-exists 0
17 TestDownloadOnly/v1.30.0/LogsDuration 0.09
18 TestDownloadOnly/v1.30.0/DeleteAll 0.19
19 TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.57
22 TestOffline 97.83
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.09
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 143.6
29 TestAddons/parallel/Registry 22.08
31 TestAddons/parallel/InspektorGadget 10.74
32 TestAddons/parallel/MetricsServer 5.71
35 TestAddons/parallel/CSI 50.07
36 TestAddons/parallel/Headlamp 11
37 TestAddons/parallel/CloudSpanner 6.54
38 TestAddons/parallel/LocalPath 52.86
39 TestAddons/parallel/NvidiaDevicePlugin 5.58
40 TestAddons/parallel/Yakd 6
43 TestAddons/serial/GCPAuth/Namespaces 0.17
44 TestAddons/StoppedEnableDisable 11.31
45 TestCertOptions 42.95
46 TestCertExpiration 248.01
47 TestDockerFlags 46.45
48 TestForceSystemdFlag 46.34
49 TestForceSystemdEnv 40.51
55 TestErrorSpam/setup 30.95
56 TestErrorSpam/start 0.76
57 TestErrorSpam/status 0.98
58 TestErrorSpam/pause 1.31
59 TestErrorSpam/unpause 1.34
60 TestErrorSpam/stop 2.06
63 TestFunctional/serial/CopySyncFile 0
64 TestFunctional/serial/StartWithProxy 56.66
65 TestFunctional/serial/AuditLog 0
66 TestFunctional/serial/SoftStart 34.64
67 TestFunctional/serial/KubeContext 0.06
68 TestFunctional/serial/KubectlGetPods 0.11
71 TestFunctional/serial/CacheCmd/cache/add_remote 2.61
72 TestFunctional/serial/CacheCmd/cache/add_local 1.11
73 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
74 TestFunctional/serial/CacheCmd/cache/list 0.07
75 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
76 TestFunctional/serial/CacheCmd/cache/cache_reload 1.47
77 TestFunctional/serial/CacheCmd/cache/delete 0.15
78 TestFunctional/serial/MinikubeKubectlCmd 0.17
79 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.16
80 TestFunctional/serial/ExtraConfig 43.65
81 TestFunctional/serial/ComponentHealth 0.1
82 TestFunctional/serial/LogsCmd 1.19
83 TestFunctional/serial/LogsFileCmd 1.27
84 TestFunctional/serial/InvalidService 5.06
86 TestFunctional/parallel/ConfigCmd 0.59
87 TestFunctional/parallel/DashboardCmd 11.59
88 TestFunctional/parallel/DryRun 0.77
89 TestFunctional/parallel/InternationalLanguage 0.21
90 TestFunctional/parallel/StatusCmd 1.3
94 TestFunctional/parallel/ServiceCmdConnect 11.66
95 TestFunctional/parallel/AddonsCmd 0.26
96 TestFunctional/parallel/PersistentVolumeClaim 28.44
98 TestFunctional/parallel/SSHCmd 0.66
99 TestFunctional/parallel/CpCmd 2.45
101 TestFunctional/parallel/FileSync 0.34
102 TestFunctional/parallel/CertSync 2.17
106 TestFunctional/parallel/NodeLabels 0.08
108 TestFunctional/parallel/NonActiveRuntimeDisabled 0.26
110 TestFunctional/parallel/License 0.28
112 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.66
113 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
115 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.39
116 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.11
117 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
121 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
122 TestFunctional/parallel/ServiceCmd/DeployApp 7.31
123 TestFunctional/parallel/ServiceCmd/List 0.6
124 TestFunctional/parallel/ProfileCmd/profile_not_create 0.48
125 TestFunctional/parallel/ServiceCmd/JSONOutput 0.66
126 TestFunctional/parallel/ProfileCmd/profile_list 0.49
127 TestFunctional/parallel/ProfileCmd/profile_json_output 0.47
128 TestFunctional/parallel/ServiceCmd/HTTPS 0.51
129 TestFunctional/parallel/MountCmd/any-port 8.64
130 TestFunctional/parallel/ServiceCmd/Format 0.54
131 TestFunctional/parallel/ServiceCmd/URL 0.39
132 TestFunctional/parallel/MountCmd/specific-port 2.09
133 TestFunctional/parallel/MountCmd/VerifyCleanup 2.69
134 TestFunctional/parallel/Version/short 0.1
135 TestFunctional/parallel/Version/components 0.96
136 TestFunctional/parallel/ImageCommands/ImageListShort 0.25
137 TestFunctional/parallel/ImageCommands/ImageListTable 0.33
138 TestFunctional/parallel/ImageCommands/ImageListJson 0.26
139 TestFunctional/parallel/ImageCommands/ImageListYaml 0.24
140 TestFunctional/parallel/ImageCommands/ImageBuild 2.8
141 TestFunctional/parallel/ImageCommands/Setup 1.68
142 TestFunctional/parallel/UpdateContextCmd/no_changes 0.23
143 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.22
144 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.21
145 TestFunctional/parallel/DockerEnv/bash 1.3
146 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.36
147 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.8
148 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 5.87
149 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.85
150 TestFunctional/parallel/ImageCommands/ImageRemove 0.47
151 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.26
152 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.04
153 TestFunctional/delete_addon-resizer_images 0.09
154 TestFunctional/delete_my-image_image 0.01
155 TestFunctional/delete_minikube_cached_images 0.02
159 TestMultiControlPlane/serial/StartCluster 128.87
160 TestMultiControlPlane/serial/DeployApp 57.74
161 TestMultiControlPlane/serial/PingHostFromPods 1.76
162 TestMultiControlPlane/serial/AddWorkerNode 25.58
163 TestMultiControlPlane/serial/NodeLabels 0.13
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.78
165 TestMultiControlPlane/serial/CopyFile 19.46
166 TestMultiControlPlane/serial/StopSecondaryNode 11.68
167 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.58
168 TestMultiControlPlane/serial/RestartSecondaryNode 40
169 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 4.99
170 TestMultiControlPlane/serial/RestartClusterKeepsNodes 241.43
171 TestMultiControlPlane/serial/DeleteSecondaryNode 12.67
172 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.54
173 TestMultiControlPlane/serial/StopCluster 32.95
174 TestMultiControlPlane/serial/RestartCluster 93.45
175 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.62
176 TestMultiControlPlane/serial/AddSecondaryNode 47.97
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.75
180 TestImageBuild/serial/Setup 31.67
181 TestImageBuild/serial/NormalBuild 2.12
182 TestImageBuild/serial/BuildWithBuildArg 0.94
183 TestImageBuild/serial/BuildWithDockerIgnore 1.02
184 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.75
188 TestJSONOutput/start/Command 48.69
189 TestJSONOutput/start/Audit 0
191 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
194 TestJSONOutput/pause/Command 0.61
195 TestJSONOutput/pause/Audit 0
197 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
198 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
200 TestJSONOutput/unpause/Command 0.56
201 TestJSONOutput/unpause/Audit 0
203 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
204 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
206 TestJSONOutput/stop/Command 5.71
207 TestJSONOutput/stop/Audit 0
209 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
210 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
211 TestErrorJSONOutput 0.23
213 TestKicCustomNetwork/create_custom_network 37.4
214 TestKicCustomNetwork/use_default_bridge_network 35.15
215 TestKicExistingNetwork 34.99
216 TestKicCustomSubnet 37.62
217 TestKicStaticIP 37.01
218 TestMainNoArgs 0.07
219 TestMinikubeProfile 68.06
222 TestMountStart/serial/StartWithMountFirst 7.86
223 TestMountStart/serial/VerifyMountFirst 0.26
224 TestMountStart/serial/StartWithMountSecond 8.18
225 TestMountStart/serial/VerifyMountSecond 0.26
226 TestMountStart/serial/DeleteFirst 1.46
227 TestMountStart/serial/VerifyMountPostDelete 0.25
228 TestMountStart/serial/Stop 1.21
229 TestMountStart/serial/RestartStopped 8.68
230 TestMountStart/serial/VerifyMountPostStop 0.25
233 TestMultiNode/serial/FreshStart2Nodes 84.33
234 TestMultiNode/serial/DeployApp2Nodes 55.61
235 TestMultiNode/serial/PingHostFrom2Pods 1.07
236 TestMultiNode/serial/AddNode 18.76
237 TestMultiNode/serial/MultiNodeLabels 0.11
238 TestMultiNode/serial/ProfileList 0.33
239 TestMultiNode/serial/CopyFile 10.23
240 TestMultiNode/serial/StopNode 2.24
241 TestMultiNode/serial/StartAfterStop 10.87
242 TestMultiNode/serial/RestartKeepsNodes 88.05
243 TestMultiNode/serial/DeleteNode 5.44
244 TestMultiNode/serial/StopMultiNode 21.71
245 TestMultiNode/serial/RestartMultiNode 57.3
246 TestMultiNode/serial/ValidateNameConflict 37.6
251 TestPreload 140.42
253 TestScheduledStopUnix 107.43
254 TestSkaffold 117.22
256 TestInsufficientStorage 12.02
257 TestRunningBinaryUpgrade 95.91
259 TestKubernetesUpgrade 370.33
260 TestMissingContainerUpgrade 115.12
262 TestNoKubernetes/serial/StartNoK8sWithVersion 0.13
263 TestNoKubernetes/serial/StartWithK8s 46.01
264 TestNoKubernetes/serial/StartWithStopK8s 8.18
265 TestNoKubernetes/serial/Start 10.55
266 TestNoKubernetes/serial/VerifyK8sNotRunning 0.29
267 TestNoKubernetes/serial/ProfileList 1.05
268 TestNoKubernetes/serial/Stop 1.25
269 TestNoKubernetes/serial/StartNoArgs 7.39
270 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.28
282 TestStoppedBinaryUpgrade/Setup 1.16
283 TestStoppedBinaryUpgrade/Upgrade 113.35
284 TestStoppedBinaryUpgrade/MinikubeLogs 1.43
293 TestPause/serial/Start 88.46
294 TestPause/serial/SecondStartNoReconfiguration 31.21
295 TestPause/serial/Pause 0.74
296 TestPause/serial/VerifyStatus 0.39
297 TestPause/serial/Unpause 0.51
298 TestPause/serial/PauseAgain 1.04
299 TestPause/serial/DeletePaused 2.34
300 TestPause/serial/VerifyDeletedResources 16.02
301 TestNetworkPlugins/group/auto/Start 91.71
302 TestNetworkPlugins/group/kindnet/Start 68.99
303 TestNetworkPlugins/group/auto/KubeletFlags 0.34
304 TestNetworkPlugins/group/auto/NetCatPod 11.39
305 TestNetworkPlugins/group/auto/DNS 0.28
306 TestNetworkPlugins/group/auto/Localhost 0.2
307 TestNetworkPlugins/group/auto/HairPin 0.23
308 TestNetworkPlugins/group/calico/Start 91.38
309 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
310 TestNetworkPlugins/group/kindnet/KubeletFlags 0.4
311 TestNetworkPlugins/group/kindnet/NetCatPod 12.36
312 TestNetworkPlugins/group/kindnet/DNS 0.2
313 TestNetworkPlugins/group/kindnet/Localhost 0.17
314 TestNetworkPlugins/group/kindnet/HairPin 0.2
315 TestNetworkPlugins/group/custom-flannel/Start 69.45
316 TestNetworkPlugins/group/calico/ControllerPod 6.01
317 TestNetworkPlugins/group/calico/KubeletFlags 0.45
318 TestNetworkPlugins/group/calico/NetCatPod 12.36
319 TestNetworkPlugins/group/calico/DNS 0.23
320 TestNetworkPlugins/group/calico/Localhost 0.16
321 TestNetworkPlugins/group/calico/HairPin 0.19
322 TestNetworkPlugins/group/false/Start 57.24
323 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.35
324 TestNetworkPlugins/group/custom-flannel/NetCatPod 14.36
325 TestNetworkPlugins/group/custom-flannel/DNS 0.34
326 TestNetworkPlugins/group/custom-flannel/Localhost 0.32
327 TestNetworkPlugins/group/custom-flannel/HairPin 0.2
328 TestNetworkPlugins/group/enable-default-cni/Start 88.48
329 TestNetworkPlugins/group/false/KubeletFlags 0.36
330 TestNetworkPlugins/group/false/NetCatPod 10.28
331 TestNetworkPlugins/group/false/DNS 0.32
332 TestNetworkPlugins/group/false/Localhost 0.3
333 TestNetworkPlugins/group/false/HairPin 0.31
334 TestNetworkPlugins/group/flannel/Start 65.17
335 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.31
336 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.3
337 TestNetworkPlugins/group/enable-default-cni/DNS 0.23
338 TestNetworkPlugins/group/enable-default-cni/Localhost 0.21
339 TestNetworkPlugins/group/enable-default-cni/HairPin 0.23
340 TestNetworkPlugins/group/flannel/ControllerPod 6.01
341 TestNetworkPlugins/group/flannel/KubeletFlags 0.36
342 TestNetworkPlugins/group/flannel/NetCatPod 11.37
343 TestNetworkPlugins/group/bridge/Start 93.48
344 TestNetworkPlugins/group/flannel/DNS 0.19
345 TestNetworkPlugins/group/flannel/Localhost 0.15
346 TestNetworkPlugins/group/flannel/HairPin 0.16
347 TestNetworkPlugins/group/kubenet/Start 93.37
348 TestNetworkPlugins/group/bridge/KubeletFlags 0.3
349 TestNetworkPlugins/group/bridge/NetCatPod 10.28
350 TestNetworkPlugins/group/bridge/DNS 0.21
351 TestNetworkPlugins/group/bridge/Localhost 0.16
352 TestNetworkPlugins/group/bridge/HairPin 0.17
354 TestStartStop/group/old-k8s-version/serial/FirstStart 162.87
355 TestNetworkPlugins/group/kubenet/KubeletFlags 0.39
356 TestNetworkPlugins/group/kubenet/NetCatPod 14.35
357 TestNetworkPlugins/group/kubenet/DNS 0.19
358 TestNetworkPlugins/group/kubenet/Localhost 0.17
359 TestNetworkPlugins/group/kubenet/HairPin 0.21
361 TestStartStop/group/embed-certs/serial/FirstStart 80.82
362 TestStartStop/group/embed-certs/serial/DeployApp 8.4
363 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.25
364 TestStartStop/group/embed-certs/serial/Stop 11.06
365 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.21
366 TestStartStop/group/embed-certs/serial/SecondStart 266.56
367 TestStartStop/group/old-k8s-version/serial/DeployApp 9.55
368 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.66
369 TestStartStop/group/old-k8s-version/serial/Stop 11.15
370 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.21
372 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
373 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.09
374 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.22
375 TestStartStop/group/embed-certs/serial/Pause 3
377 TestStartStop/group/no-preload/serial/FirstStart 55.9
378 TestStartStop/group/no-preload/serial/DeployApp 10.41
379 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.06
380 TestStartStop/group/no-preload/serial/Stop 11.02
381 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
382 TestStartStop/group/no-preload/serial/SecondStart 265.81
383 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.02
384 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 6.11
385 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.26
386 TestStartStop/group/old-k8s-version/serial/Pause 2.77
388 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 88.42
389 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.37
390 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.05
391 TestStartStop/group/default-k8s-diff-port/serial/Stop 10.9
392 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.24
393 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 303.43
394 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
395 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.11
396 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.26
397 TestStartStop/group/no-preload/serial/Pause 2.86
399 TestStartStop/group/newest-cni/serial/FirstStart 47.12
400 TestStartStop/group/newest-cni/serial/DeployApp 0
401 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.19
402 TestStartStop/group/newest-cni/serial/Stop 11
403 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.21
404 TestStartStop/group/newest-cni/serial/SecondStart 18.01
405 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
406 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
407 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.23
408 TestStartStop/group/newest-cni/serial/Pause 3.01
409 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
410 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.11
411 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.21
412 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.76
TestDownloadOnly/v1.20.0/json-events (12.3s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-885518 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-885518 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (12.302192079s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (12.30s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.3s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-885518
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-885518: exit status 85 (298.699175ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-885518 | jenkins | v1.33.0 | 22 Apr 24 16:56 UTC |          |
	|         | -p download-only-885518        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/22 16:56:57
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.22.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0422 16:56:57.741611    7733 out.go:291] Setting OutFile to fd 1 ...
	I0422 16:56:57.741771    7733 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 16:56:57.741781    7733 out.go:304] Setting ErrFile to fd 2...
	I0422 16:56:57.741786    7733 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 16:56:57.742014    7733 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18706-2371/.minikube/bin
	W0422 16:56:57.742154    7733 root.go:314] Error reading config file at /home/jenkins/minikube-integration/18706-2371/.minikube/config/config.json: open /home/jenkins/minikube-integration/18706-2371/.minikube/config/config.json: no such file or directory
	I0422 16:56:57.742596    7733 out.go:298] Setting JSON to true
	I0422 16:56:57.743400    7733 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":2365,"bootTime":1713802653,"procs":147,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0422 16:56:57.743472    7733 start.go:139] virtualization:  
	I0422 16:56:57.747733    7733 out.go:97] [download-only-885518] minikube v1.33.0 on Ubuntu 20.04 (arm64)
	I0422 16:56:57.750193    7733 out.go:169] MINIKUBE_LOCATION=18706
	W0422 16:56:57.747914    7733 preload.go:294] Failed to list preload files: open /home/jenkins/minikube-integration/18706-2371/.minikube/cache/preloaded-tarball: no such file or directory
	I0422 16:56:57.747955    7733 notify.go:220] Checking for updates...
	I0422 16:56:57.755007    7733 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0422 16:56:57.757059    7733 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18706-2371/kubeconfig
	I0422 16:56:57.758975    7733 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18706-2371/.minikube
	I0422 16:56:57.761322    7733 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0422 16:56:57.765055    7733 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0422 16:56:57.765371    7733 driver.go:392] Setting default libvirt URI to qemu:///system
	I0422 16:56:57.783133    7733 docker.go:122] docker version: linux-26.0.2:Docker Engine - Community
	I0422 16:56:57.783231    7733 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0422 16:56:58.096367    7733 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-04-22 16:56:58.085573543 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1058-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215101440 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:26.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1]] Warnings:<nil>}}
	I0422 16:56:58.096505    7733 docker.go:295] overlay module found
	I0422 16:56:58.098903    7733 out.go:97] Using the docker driver based on user configuration
	I0422 16:56:58.098934    7733 start.go:297] selected driver: docker
	I0422 16:56:58.098942    7733 start.go:901] validating driver "docker" against <nil>
	I0422 16:56:58.099082    7733 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0422 16:56:58.148444    7733 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-04-22 16:56:58.138723494 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1058-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215101440 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:26.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1]] Warnings:<nil>}}
	I0422 16:56:58.148636    7733 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0422 16:56:58.149057    7733 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0422 16:56:58.149230    7733 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0422 16:56:58.151667    7733 out.go:169] Using Docker driver with root privileges
	I0422 16:56:58.153436    7733 cni.go:84] Creating CNI manager for ""
	I0422 16:56:58.153467    7733 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0422 16:56:58.153543    7733 start.go:340] cluster config:
	{Name:download-only-885518 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-885518 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 16:56:58.155916    7733 out.go:97] Starting "download-only-885518" primary control-plane node in "download-only-885518" cluster
	I0422 16:56:58.155942    7733 cache.go:121] Beginning downloading kic base image for docker with docker
	I0422 16:56:58.157704    7733 out.go:97] Pulling base image v0.0.43-1713736339-18706 ...
	I0422 16:56:58.157733    7733 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0422 16:56:58.157888    7733 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e in local docker daemon
	I0422 16:56:58.170868    7733 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e to local cache
	I0422 16:56:58.171028    7733 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e in local cache directory
	I0422 16:56:58.171147    7733 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e to local cache
	I0422 16:56:58.227346    7733 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0422 16:56:58.227378    7733 cache.go:56] Caching tarball of preloaded images
	I0422 16:56:58.227547    7733 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0422 16:56:58.230064    7733 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0422 16:56:58.230095    7733 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0422 16:56:58.355200    7733 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /home/jenkins/minikube-integration/18706-2371/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0422 16:57:04.629896    7733 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e as a tarball
	
	
	* The control-plane node download-only-885518 host does not exist
	  To start a cluster, run: "minikube start -p download-only-885518"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.30s)
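
The non-zero "exit status 85" above is expected for a download-only profile: as the captured stdout shows, only the kic base image and the v1.20.0 preload tarball were cached and no control-plane host was ever created, so minikube logs has nothing to attach to; the test records the failure message and still passes. A sketch of reproducing the same exit locally, assuming the runner's out/minikube-linux-arm64 binary layout:

	# Cache artifacts without creating a node, then ask for logs; the logs
	# command exits non-zero (85 on this runner) because the profile has no host.
	out/minikube-linux-arm64 start --download-only -p download-only-885518 --force \
	  --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker
	out/minikube-linux-arm64 logs -p download-only-885518; echo "exit: $?"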

TestDownloadOnly/v1.20.0/DeleteAll (0.3s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.30s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.21s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-885518
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.21s)

TestDownloadOnly/v1.30.0/json-events (8.8s)

=== RUN   TestDownloadOnly/v1.30.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-303600 --force --alsologtostderr --kubernetes-version=v1.30.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-303600 --force --alsologtostderr --kubernetes-version=v1.30.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (8.803645024s)
--- PASS: TestDownloadOnly/v1.30.0/json-events (8.80s)

TestDownloadOnly/v1.30.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.30.0/preload-exists
--- PASS: TestDownloadOnly/v1.30.0/preload-exists (0.00s)

TestDownloadOnly/v1.30.0/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.30.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-303600
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-303600: exit status 85 (91.004396ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-885518 | jenkins | v1.33.0 | 22 Apr 24 16:56 UTC |                     |
	|         | -p download-only-885518        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.0 | 22 Apr 24 16:57 UTC | 22 Apr 24 16:57 UTC |
	| delete  | -p download-only-885518        | download-only-885518 | jenkins | v1.33.0 | 22 Apr 24 16:57 UTC | 22 Apr 24 16:57 UTC |
	| start   | -o=json --download-only        | download-only-303600 | jenkins | v1.33.0 | 22 Apr 24 16:57 UTC |                     |
	|         | -p download-only-303600        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/22 16:57:10
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.22.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0422 16:57:10.851883    7902 out.go:291] Setting OutFile to fd 1 ...
	I0422 16:57:10.852055    7902 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 16:57:10.852064    7902 out.go:304] Setting ErrFile to fd 2...
	I0422 16:57:10.852069    7902 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 16:57:10.852328    7902 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18706-2371/.minikube/bin
	I0422 16:57:10.852736    7902 out.go:298] Setting JSON to true
	I0422 16:57:10.853469    7902 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":2378,"bootTime":1713802653,"procs":145,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0422 16:57:10.853537    7902 start.go:139] virtualization:  
	I0422 16:57:10.881072    7902 out.go:97] [download-only-303600] minikube v1.33.0 on Ubuntu 20.04 (arm64)
	I0422 16:57:10.881312    7902 notify.go:220] Checking for updates...
	I0422 16:57:10.911008    7902 out.go:169] MINIKUBE_LOCATION=18706
	I0422 16:57:10.947943    7902 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0422 16:57:10.975710    7902 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18706-2371/kubeconfig
	I0422 16:57:11.007692    7902 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18706-2371/.minikube
	I0422 16:57:11.038217    7902 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0422 16:57:11.088175    7902 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0422 16:57:11.088511    7902 driver.go:392] Setting default libvirt URI to qemu:///system
	I0422 16:57:11.107733    7902 docker.go:122] docker version: linux-26.0.2:Docker Engine - Community
	I0422 16:57:11.107838    7902 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0422 16:57:11.191009    7902 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-04-22 16:57:11.180706093 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1058-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215101440 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:26.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1]] Warnings:<nil>}}
	I0422 16:57:11.191123    7902 docker.go:295] overlay module found
	I0422 16:57:11.204268    7902 out.go:97] Using the docker driver based on user configuration
	I0422 16:57:11.204315    7902 start.go:297] selected driver: docker
	I0422 16:57:11.204322    7902 start.go:901] validating driver "docker" against <nil>
	I0422 16:57:11.204458    7902 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0422 16:57:11.266924    7902 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-04-22 16:57:11.258258993 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1058-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215101440 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:26.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1]] Warnings:<nil>}}
	I0422 16:57:11.267091    7902 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0422 16:57:11.267386    7902 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0422 16:57:11.267550    7902 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0422 16:57:11.270352    7902 out.go:169] Using Docker driver with root privileges
	I0422 16:57:11.272395    7902 cni.go:84] Creating CNI manager for ""
	I0422 16:57:11.272430    7902 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0422 16:57:11.272450    7902 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0422 16:57:11.272550    7902 start.go:340] cluster config:
	{Name:download-only-303600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:download-only-303600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 16:57:11.274821    7902 out.go:97] Starting "download-only-303600" primary control-plane node in "download-only-303600" cluster
	I0422 16:57:11.274859    7902 cache.go:121] Beginning downloading kic base image for docker with docker
	I0422 16:57:11.277038    7902 out.go:97] Pulling base image v0.0.43-1713736339-18706 ...
	I0422 16:57:11.277075    7902 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0422 16:57:11.277258    7902 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e in local docker daemon
	I0422 16:57:11.291558    7902 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e to local cache
	I0422 16:57:11.291681    7902 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e in local cache directory
	I0422 16:57:11.291705    7902 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e in local cache directory, skipping pull
	I0422 16:57:11.291710    7902 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e exists in cache, skipping pull
	I0422 16:57:11.291721    7902 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e as a tarball
	I0422 16:57:11.342437    7902 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	I0422 16:57:11.342465    7902 cache.go:56] Caching tarball of preloaded images
	I0422 16:57:11.342627    7902 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0422 16:57:11.345144    7902 out.go:97] Downloading Kubernetes v1.30.0 preload ...
	I0422 16:57:11.345174    7902 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4 ...
	I0422 16:57:11.454178    7902 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4?checksum=md5:677034533668c42fec962cc52f9b3c42 -> /home/jenkins/minikube-integration/18706-2371/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-303600 host does not exist
	  To start a cluster, run: "minikube start -p download-only-303600"
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.0/LogsDuration (0.09s)
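
The download step logged above fetches the preload tarball and validates it against the md5 carried in the URL's query string. That check can be reproduced by hand; a minimal sketch, where the URL and checksum are taken verbatim from the download.go line above and everything else is illustrative:

URL="https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-arm64.tar.lz4"
curl -fsSLo preload.tar.lz4 "$URL"                                       # fetch the arm64 preload tarball
echo "677034533668c42fec962cc52f9b3c42  preload.tar.lz4" | md5sum -c -   # verify the advertised md5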

TestDownloadOnly/v1.30.0/DeleteAll (0.19s)

=== RUN   TestDownloadOnly/v1.30.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.30.0/DeleteAll (0.19s)

TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-303600
--- PASS: TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds (0.14s)

TestBinaryMirror (0.57s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-196031 --alsologtostderr --binary-mirror http://127.0.0.1:38499 --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "binary-mirror-196031" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-196031
--- PASS: TestBinaryMirror (0.57s)
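
The --binary-mirror flag used above makes minikube fetch its Kubernetes binaries from a local HTTP endpoint instead of the default download site. A hypothetical way to stand up such a mirror (the port matches the test invocation; the directory name and server choice are assumptions, not what the harness actually ran):

# serve pre-downloaded Kubernetes binaries over plain HTTP
python3 -m http.server 38499 --directory ./k8s-binaries &
out/minikube-linux-arm64 start --download-only -p binary-mirror-demo \
  --binary-mirror http://127.0.0.1:38499 --driver=docker --container-runtime=docker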

TestOffline (97.83s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-arm64 start -p offline-docker-262638 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-arm64 start -p offline-docker-262638 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker: (1m35.582135709s)
helpers_test.go:175: Cleaning up "offline-docker-262638" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p offline-docker-262638
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p offline-docker-262638: (2.250754442s)
--- PASS: TestOffline (97.83s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.09s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-613799
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-613799: exit status 85 (89.890712ms)
-- stdout --
	* Profile "addons-613799" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-613799"
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.09s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-613799
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-613799: exit status 85 (72.590804ms)
-- stdout --
	* Profile "addons-613799" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-613799"
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

TestAddons/Setup (143.6s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-arm64 start -p addons-613799 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns
addons_test.go:109: (dbg) Done: out/minikube-linux-arm64 start -p addons-613799 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns: (2m23.594966941s)
--- PASS: TestAddons/Setup (143.60s)
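
All of the --addons flags above are applied at start time, but the same addons can be toggled on a running profile. A short sketch using standard minikube subcommands (assumes the addons-613799 cluster still exists):

out/minikube-linux-arm64 -p addons-613799 addons list              # show enabled/disabled state
out/minikube-linux-arm64 -p addons-613799 addons enable ingress
out/minikube-linux-arm64 -p addons-613799 addons disable ingress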

TestAddons/parallel/Registry (22.08s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 44.068933ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-7sjdx" [fe3a22e5-a967-42c5-a577-12e41a8d87a3] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.004526305s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-vvsgz" [522abbcd-88fe-48d2-801e-abcd5103e5f8] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.004946907s
addons_test.go:340: (dbg) Run:  kubectl --context addons-613799 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-613799 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-613799 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (8.998554584s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-arm64 -p addons-613799 ip
2024/04/22 17:00:06 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:388: (dbg) Run:  out/minikube-linux-arm64 -p addons-613799 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (22.08s)
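
The DEBUG line above shows the test reaching the registry addon at the node IP on port 5000. The same reachability check can be run from the host with the standard registry HTTP API; a sketch, assuming the addon is still enabled:

IP=$(out/minikube-linux-arm64 -p addons-613799 ip)   # node IP, e.g. 192.168.49.2
curl -fsS "http://$IP:5000/v2/_catalog"              # lists repositories if the registry is up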

TestAddons/parallel/InspektorGadget (10.74s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-tmgbd" [3258b925-a7a6-467e-9bc5-830098a847d4] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.007553456s
addons_test.go:841: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-613799
addons_test.go:841: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-613799: (5.73296161s)
--- PASS: TestAddons/parallel/InspektorGadget (10.74s)

TestAddons/parallel/MetricsServer (5.71s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 2.257861ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-6pqqb" [8528911a-6f9f-4adb-9fb9-16526fdd739f] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.00480369s
addons_test.go:415: (dbg) Run:  kubectl --context addons-613799 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-linux-arm64 -p addons-613799 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.71s)

TestAddons/parallel/CSI (50.07s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 58.837502ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-613799 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-613799 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-613799 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-613799 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-613799 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-613799 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-613799 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-613799 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-613799 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-613799 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-613799 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-613799 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-613799 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-613799 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [58498588-d43e-4312-a0d5-2b950a092b5c] Pending
helpers_test.go:344: "task-pv-pod" [58498588-d43e-4312-a0d5-2b950a092b5c] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [58498588-d43e-4312-a0d5-2b950a092b5c] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 13.003550842s
addons_test.go:584: (dbg) Run:  kubectl --context addons-613799 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-613799 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-613799 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-613799 delete pod task-pv-pod
addons_test.go:594: (dbg) Done: kubectl --context addons-613799 delete pod task-pv-pod: (1.475333287s)
addons_test.go:600: (dbg) Run:  kubectl --context addons-613799 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-613799 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-613799 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-613799 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-613799 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-613799 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-613799 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-613799 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-613799 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [398baaf9-a0aa-4b07-80db-bf4c781f2889] Pending
helpers_test.go:344: "task-pv-pod-restore" [398baaf9-a0aa-4b07-80db-bf4c781f2889] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [398baaf9-a0aa-4b07-80db-bf4c781f2889] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003664923s
addons_test.go:626: (dbg) Run:  kubectl --context addons-613799 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-613799 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-613799 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-arm64 -p addons-613799 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-linux-arm64 -p addons-613799 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.986646984s)
addons_test.go:642: (dbg) Run:  out/minikube-linux-arm64 -p addons-613799 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (50.07s)
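
The sequence above snapshots the hpvc claim into a VolumeSnapshot and then restores it through a new PVC that names the snapshot as its dataSource. A hypothetical minimal equivalent of testdata/csi-hostpath-driver/snapshot.yaml and pvc-restore.yaml; the class names and storage size are assumptions, not the exact testdata:

kubectl --context addons-613799 apply -f - <<'EOF'
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: new-snapshot-demo
spec:
  volumeSnapshotClassName: csi-hostpath-snapclass   # assumed class name
  source:
    persistentVolumeClaimName: hpvc
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hpvc-restore
spec:
  storageClassName: csi-hostpath-sc                 # assumed class name
  dataSource:
    name: new-snapshot-demo
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 1Gi
EOF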

TestAddons/parallel/Headlamp (11s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-613799 --alsologtostderr -v=1
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7559bf459f-8zwkj" [69f32b44-80c8-4f82-b71a-ed89d0af78bb] Pending
helpers_test.go:344: "headlamp-7559bf459f-8zwkj" [69f32b44-80c8-4f82-b71a-ed89d0af78bb] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7559bf459f-8zwkj" [69f32b44-80c8-4f82-b71a-ed89d0af78bb] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.00369086s
--- PASS: TestAddons/parallel/Headlamp (11.00s)

TestAddons/parallel/CloudSpanner (6.54s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-8677549d7-d8s49" [ce0c8a99-7fa2-4bcc-a05e-5a770d851ca6] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.00314101s
addons_test.go:860: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-613799
--- PASS: TestAddons/parallel/CloudSpanner (6.54s)

TestAddons/parallel/LocalPath (52.86s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-613799 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-613799 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-613799 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-613799 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-613799 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-613799 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-613799 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-613799 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [72ed48d9-1ec0-43d7-8342-55768b004ee0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [72ed48d9-1ec0-43d7-8342-55768b004ee0] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [72ed48d9-1ec0-43d7-8342-55768b004ee0] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.009346374s
addons_test.go:891: (dbg) Run:  kubectl --context addons-613799 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-arm64 -p addons-613799 ssh "cat /opt/local-path-provisioner/pvc-ca4b7ddd-c8b8-43eb-829c-c7146edd24e8_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-613799 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-613799 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-arm64 -p addons-613799 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-linux-arm64 -p addons-613799 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.485085453s)
--- PASS: TestAddons/parallel/LocalPath (52.86s)
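
The ssh step above reads file1 back from /opt/local-path-provisioner, which is where the local-path provisioner materializes volumes on the node. A sketch of the PVC shape involved; the claim name matches the test, while the storage class follows the local-path-provisioner convention and the size is an assumption:

kubectl --context addons-613799 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  storageClassName: local-path   # assumed; class installed by storage-provisioner-rancher
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 64Mi
EOF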

TestAddons/parallel/NvidiaDevicePlugin (5.58s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-rrg6b" [aaec657e-0a16-47e1-b7fa-19d45d1b473a] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.005348893s
addons_test.go:955: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-613799
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.58s)

TestAddons/parallel/Yakd (6s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-5ddbf7d777-sc655" [730db8b9-00e8-41f3-9f24-832a620d0c62] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.0040795s
--- PASS: TestAddons/parallel/Yakd (6.00s)

TestAddons/serial/GCPAuth/Namespaces (0.17s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-613799 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-613799 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.17s)

TestAddons/StoppedEnableDisable (11.31s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-613799
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-613799: (11.033199094s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-613799
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-613799
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-613799
--- PASS: TestAddons/StoppedEnableDisable (11.31s)

TestCertOptions (42.95s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-452261 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-452261 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker: (40.235701322s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-452261 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-452261 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-452261 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-452261" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-452261
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-452261: (2.066802637s)
--- PASS: TestCertOptions (42.95s)
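
The openssl step above is what verifies that the extra --apiserver-ips/--apiserver-names values were baked into the apiserver serving certificate. The same check, filtered down to two of the configured SANs (a sketch; the grep patterns follow openssl's SAN output format):

out/minikube-linux-arm64 -p cert-options-452261 ssh \
  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
  | grep -E 'IP Address:192.168.15.15|DNS:www.google.com'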

TestCertExpiration (248.01s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-090097 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker
E0422 17:40:26.633419    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/functional-892312/client.crt: no such file or directory
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-090097 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker: (42.639153968s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-090097 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-090097 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker: (23.08687896s)
helpers_test.go:175: Cleaning up "cert-expiration-090097" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-090097
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-090097: (2.278944854s)
--- PASS: TestCertExpiration (248.01s)
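
One way to observe the effect of --cert-expiration is to read the notAfter date straight from the apiserver certificate between the two starts. A sketch; the certificate path is the same one TestCertOptions inspects above:

out/minikube-linux-arm64 -p cert-expiration-090097 ssh \
  "openssl x509 -enddate -noout -in /var/lib/minikube/certs/apiserver.crt"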

TestDockerFlags (46.45s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-arm64 start -p docker-flags-464430 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:51: (dbg) Done: out/minikube-linux-arm64 start -p docker-flags-464430 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (43.34836517s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-arm64 -p docker-flags-464430 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-arm64 -p docker-flags-464430 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-464430" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-flags-464430
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-flags-464430: (2.328742658s)
--- PASS: TestDockerFlags (46.45s)
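
The two systemctl probes above are what assert that --docker-env and --docker-opt reached the daemon's unit. The same checks, filtered to the injected values (a sketch built from the commands in the log):

out/minikube-linux-arm64 -p docker-flags-464430 ssh \
  "sudo systemctl show docker --property=Environment --no-pager" | grep -E 'FOO=BAR|BAZ=BAT'
out/minikube-linux-arm64 -p docker-flags-464430 ssh \
  "sudo systemctl show docker --property=ExecStart --no-pager" | grep -E 'debug|icc=true'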

TestForceSystemdFlag (46.34s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-778682 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
E0422 17:39:45.108860    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/addons-613799/client.crt: no such file or directory
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-778682 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (43.679510861s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-778682 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-778682" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-778682
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-778682: (2.239126837s)
--- PASS: TestForceSystemdFlag (46.34s)
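
When --force-systemd takes effect, Docker inside the node reports systemd rather than the default cgroupfs as its cgroup driver, which is exactly what the docker info probe above checks. Run by hand (command taken from the log):

out/minikube-linux-arm64 -p force-systemd-flag-778682 ssh "docker info --format {{.CgroupDriver}}"
# expected output: systemd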

TestForceSystemdEnv (40.51s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-272538 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-272538 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (37.91639723s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-272538 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-272538" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-272538
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-272538: (2.143665939s)
--- PASS: TestForceSystemdEnv (40.51s)

TestErrorSpam/setup (30.95s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-997727 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-997727 --driver=docker  --container-runtime=docker
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-997727 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-997727 --driver=docker  --container-runtime=docker: (30.948089757s)
--- PASS: TestErrorSpam/setup (30.95s)

TestErrorSpam/start (0.76s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-997727 --log_dir /tmp/nospam-997727 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-997727 --log_dir /tmp/nospam-997727 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-997727 --log_dir /tmp/nospam-997727 start --dry-run
--- PASS: TestErrorSpam/start (0.76s)

TestErrorSpam/status (0.98s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-997727 --log_dir /tmp/nospam-997727 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-997727 --log_dir /tmp/nospam-997727 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-997727 --log_dir /tmp/nospam-997727 status
--- PASS: TestErrorSpam/status (0.98s)

TestErrorSpam/pause (1.31s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-997727 --log_dir /tmp/nospam-997727 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-997727 --log_dir /tmp/nospam-997727 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-997727 --log_dir /tmp/nospam-997727 pause
--- PASS: TestErrorSpam/pause (1.31s)

TestErrorSpam/unpause (1.34s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-997727 --log_dir /tmp/nospam-997727 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-997727 --log_dir /tmp/nospam-997727 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-997727 --log_dir /tmp/nospam-997727 unpause
--- PASS: TestErrorSpam/unpause (1.34s)

TestErrorSpam/stop (2.06s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-997727 --log_dir /tmp/nospam-997727 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-997727 --log_dir /tmp/nospam-997727 stop: (1.83941993s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-997727 --log_dir /tmp/nospam-997727 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-997727 --log_dir /tmp/nospam-997727 stop
--- PASS: TestErrorSpam/stop (2.06s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/18706-2371/.minikube/files/etc/test/nested/copy/7728/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (56.66s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-arm64 start -p functional-892312 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker
functional_test.go:2230: (dbg) Done: out/minikube-linux-arm64 start -p functional-892312 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker: (56.659942806s)
--- PASS: TestFunctional/serial/StartWithProxy (56.66s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (34.64s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-arm64 start -p functional-892312 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-arm64 start -p functional-892312 --alsologtostderr -v=8: (34.637423161s)
functional_test.go:659: soft start took 34.639212448s for "functional-892312" cluster.
--- PASS: TestFunctional/serial/SoftStart (34.64s)

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.11s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-892312 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.11s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.61s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-892312 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-892312 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-892312 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.61s)

TestFunctional/serial/CacheCmd/cache/add_local (1.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-892312 /tmp/TestFunctionalserialCacheCmdcacheadd_local2516931203/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-arm64 -p functional-892312 cache add minikube-local-cache-test:functional-892312
functional_test.go:1090: (dbg) Run:  out/minikube-linux-arm64 -p functional-892312 cache delete minikube-local-cache-test:functional-892312
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-892312
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.11s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

TestFunctional/serial/CacheCmd/cache/list (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-arm64 -p functional-892312 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.47s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-arm64 -p functional-892312 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-arm64 -p functional-892312 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-892312 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (289.766672ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-arm64 -p functional-892312 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-arm64 -p functional-892312 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.47s)
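
The cache_reload cycle above is: delete a cached image from the node, observe that crictl no longer finds it (the expected exit status 1), then repush everything in minikube's local cache. As plain commands, taken from the log lines above:

out/minikube-linux-arm64 -p functional-892312 ssh sudo docker rmi registry.k8s.io/pause:latest
out/minikube-linux-arm64 -p functional-892312 cache reload
out/minikube-linux-arm64 -p functional-892312 ssh sudo crictl inspecti registry.k8s.io/pause:latest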

TestFunctional/serial/CacheCmd/cache/delete (0.15s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.15s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.17s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-arm64 -p functional-892312 kubectl -- --context functional-892312 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.17s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.16s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-892312 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.16s)

                                                
                                    
TestFunctional/serial/ExtraConfig (43.65s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-arm64 start -p functional-892312 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0422 17:04:45.108692    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/addons-613799/client.crt: no such file or directory
E0422 17:04:45.116877    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/addons-613799/client.crt: no such file or directory
E0422 17:04:45.127195    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/addons-613799/client.crt: no such file or directory
E0422 17:04:45.147996    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/addons-613799/client.crt: no such file or directory
E0422 17:04:45.189249    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/addons-613799/client.crt: no such file or directory
E0422 17:04:45.269724    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/addons-613799/client.crt: no such file or directory
E0422 17:04:45.430158    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/addons-613799/client.crt: no such file or directory
E0422 17:04:45.750787    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/addons-613799/client.crt: no such file or directory
E0422 17:04:46.391661    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/addons-613799/client.crt: no such file or directory
E0422 17:04:47.671838    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/addons-613799/client.crt: no such file or directory
E0422 17:04:50.232026    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/addons-613799/client.crt: no such file or directory
E0422 17:04:55.352709    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/addons-613799/client.crt: no such file or directory
E0422 17:05:05.593483    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/addons-613799/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-arm64 start -p functional-892312 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (43.653667979s)
functional_test.go:757: restart took 43.653770647s for "functional-892312" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (43.65s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.1s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-892312 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)
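ComponentHealth asserts that each control-plane pod is Running and Ready, as the phase/status pairs above show. A hedged client-go sketch of the same check (it assumes the current kubeconfig context points at the functional-892312 cluster; this is not the suite's actual helper, which inspects the kubectl JSON logged above):

// control-plane health sketch using client-go (hypothetical).
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Same selector the test passes to kubectl: tier=control-plane in kube-system.
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
		metav1.ListOptions{LabelSelector: "tier=control-plane"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s phase: %s\n", p.Labels["component"], p.Status.Phase)
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady {
				fmt.Printf("%s ready: %s\n", p.Labels["component"], c.Status)
			}
		}
	}
}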

                                                
                                    
TestFunctional/serial/LogsCmd (1.19s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-arm64 -p functional-892312 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-arm64 -p functional-892312 logs: (1.190769103s)
--- PASS: TestFunctional/serial/LogsCmd (1.19s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.27s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-arm64 -p functional-892312 logs --file /tmp/TestFunctionalserialLogsFileCmd1432966983/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-arm64 -p functional-892312 logs --file /tmp/TestFunctionalserialLogsFileCmd1432966983/001/logs.txt: (1.264392507s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.27s)

                                                
                                    
TestFunctional/serial/InvalidService (5.06s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-892312 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-892312
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-892312: exit status 115 (664.254314ms)
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:32043 |
	|-----------|-------------|-------------|---------------------------|
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-892312 delete -f testdata/invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-892312 delete -f testdata/invalidsvc.yaml: (1.110832568s)
--- PASS: TestFunctional/serial/InvalidService (5.06s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.59s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-892312 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-892312 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-892312 config get cpus: exit status 14 (96.859032ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-892312 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-892312 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-892312 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-892312 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-892312 config get cpus: exit status 14 (101.601286ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.59s)
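Note that config get reports a missing key through exit status 14 rather than on stdout, which is what the two Non-zero exit entries above capture. A small hypothetical wrapper showing how that status surfaces to a Go caller via exec.ExitError:

// exit-status probe for `minikube config get` (hypothetical wrapper).
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("minikube", "-p", "functional-892312", "config", "get", "cpus")
	out, err := cmd.Output()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		// Exit code 14 matches "specified key could not be found in config" above.
		fmt.Printf("config get failed with exit code %d: %s", ee.ExitCode(), ee.Stderr)
		return
	}
	fmt.Printf("cpus = %s", out)
}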

                                                
                                    
TestFunctional/parallel/DashboardCmd (11.59s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-892312 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-892312 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 44301: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (11.59s)

                                                
                                    
TestFunctional/parallel/DryRun (0.77s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-arm64 start -p functional-892312 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-892312 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (253.018574ms)
-- stdout --
	* [functional-892312] minikube v1.33.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18706
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18706-2371/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18706-2371/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
-- /stdout --
** stderr ** 
	I0422 17:05:59.772055   43924 out.go:291] Setting OutFile to fd 1 ...
	I0422 17:05:59.772248   43924 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 17:05:59.772301   43924 out.go:304] Setting ErrFile to fd 2...
	I0422 17:05:59.772321   43924 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 17:05:59.772596   43924 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18706-2371/.minikube/bin
	I0422 17:05:59.773802   43924 out.go:298] Setting JSON to false
	I0422 17:05:59.775119   43924 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":2907,"bootTime":1713802653,"procs":217,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0422 17:05:59.775231   43924 start.go:139] virtualization:  
	I0422 17:05:59.780325   43924 out.go:177] * [functional-892312] minikube v1.33.0 on Ubuntu 20.04 (arm64)
	I0422 17:05:59.784086   43924 out.go:177]   - MINIKUBE_LOCATION=18706
	I0422 17:05:59.784133   43924 notify.go:220] Checking for updates...
	I0422 17:05:59.788551   43924 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0422 17:05:59.792168   43924 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18706-2371/kubeconfig
	I0422 17:05:59.794822   43924 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18706-2371/.minikube
	I0422 17:05:59.798414   43924 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0422 17:05:59.804929   43924 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0422 17:05:59.808285   43924 config.go:182] Loaded profile config "functional-892312": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0422 17:05:59.808909   43924 driver.go:392] Setting default libvirt URI to qemu:///system
	I0422 17:05:59.835108   43924 docker.go:122] docker version: linux-26.0.2:Docker Engine - Community
	I0422 17:05:59.835233   43924 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0422 17:05:59.930311   43924 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:53 SystemTime:2024-04-22 17:05:59.920066042 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1058-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215101440 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:26.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1]] Warnings:<nil>}}
	I0422 17:05:59.930427   43924 docker.go:295] overlay module found
	I0422 17:05:59.935143   43924 out.go:177] * Using the docker driver based on existing profile
	I0422 17:05:59.938014   43924 start.go:297] selected driver: docker
	I0422 17:05:59.938040   43924 start.go:901] validating driver "docker" against &{Name:functional-892312 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-892312 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 17:05:59.938153   43924 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0422 17:05:59.941013   43924 out.go:177] 
	W0422 17:05:59.943239   43924 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0422 17:05:59.945475   43924 out.go:177] 
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-arm64 start -p functional-892312 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
--- PASS: TestFunctional/parallel/DryRun (0.77s)
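The RSRC_INSUFFICIENT_REQ_MEMORY exit in the dry-run output is a pre-flight guard: 250MiB requested against the 1800MB usable minimum quoted in the message. A minimal sketch of that kind of check, with the floor taken from the message above and the helper name hypothetical (this is not minikube's source):

// requested-memory guard sketch (constants from the log message above).
package main

import "fmt"

const minUsableMemoryMB = 1800

func validateMemory(requestedMB int) error {
	if requestedMB < minUsableMemoryMB {
		return fmt.Errorf("RSRC_INSUFFICIENT_REQ_MEMORY: requested %dMiB is less than the usable minimum of %dMB",
			requestedMB, minUsableMemoryMB)
	}
	return nil
}

func main() {
	fmt.Println(validateMemory(250))  // rejected, as in the dry run above
	fmt.Println(validateMemory(4000)) // accepted (the profile's configured memory)
}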

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.21s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-arm64 start -p functional-892312 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-892312 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (211.412971ms)
-- stdout --
	* [functional-892312] minikube v1.33.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18706
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18706-2371/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18706-2371/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
-- /stdout --
** stderr ** 
	I0422 17:05:59.556186   43881 out.go:291] Setting OutFile to fd 1 ...
	I0422 17:05:59.556403   43881 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 17:05:59.556430   43881 out.go:304] Setting ErrFile to fd 2...
	I0422 17:05:59.556447   43881 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 17:05:59.556865   43881 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18706-2371/.minikube/bin
	I0422 17:05:59.557314   43881 out.go:298] Setting JSON to false
	I0422 17:05:59.558300   43881 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":2907,"bootTime":1713802653,"procs":217,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1058-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0422 17:05:59.558396   43881 start.go:139] virtualization:  
	I0422 17:05:59.561527   43881 out.go:177] * [functional-892312] minikube v1.33.0 sur Ubuntu 20.04 (arm64)
	I0422 17:05:59.564466   43881 out.go:177]   - MINIKUBE_LOCATION=18706
	I0422 17:05:59.564591   43881 notify.go:220] Checking for updates...
	I0422 17:05:59.570052   43881 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0422 17:05:59.572556   43881 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18706-2371/kubeconfig
	I0422 17:05:59.575243   43881 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18706-2371/.minikube
	I0422 17:05:59.577535   43881 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0422 17:05:59.580055   43881 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0422 17:05:59.583009   43881 config.go:182] Loaded profile config "functional-892312": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0422 17:05:59.583551   43881 driver.go:392] Setting default libvirt URI to qemu:///system
	I0422 17:05:59.602689   43881 docker.go:122] docker version: linux-26.0.2:Docker Engine - Community
	I0422 17:05:59.602806   43881 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0422 17:05:59.677257   43881 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:53 SystemTime:2024-04-22 17:05:59.66566274 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1058-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215101440 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:26.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1]] Warnings:<nil>}}
	I0422 17:05:59.677389   43881 docker.go:295] overlay module found
	I0422 17:05:59.682047   43881 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0422 17:05:59.684324   43881 start.go:297] selected driver: docker
	I0422 17:05:59.684343   43881 start.go:901] validating driver "docker" against &{Name:functional-892312 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-892312 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0422 17:05:59.684469   43881 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0422 17:05:59.687513   43881 out.go:177] 
	W0422 17:05:59.689785   43881 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0422 17:05:59.692066   43881 out.go:177] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.21s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.3s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-arm64 -p functional-892312 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-arm64 -p functional-892312 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-arm64 -p functional-892312 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.30s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (11.66s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1623: (dbg) Run:  kubectl --context functional-892312 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-892312 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-6f49f58cd5-lhl9z" [3954a09c-18aa-4ece-8f38-e4b07c1da477] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-6f49f58cd5-lhl9z" [3954a09c-18aa-4ece-8f38-e4b07c1da477] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.004209093s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-arm64 -p functional-892312 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.49.2:32510
functional_test.go:1671: http://192.168.49.2:32510: success! body:
Hostname: hello-node-connect-6f49f58cd5-lhl9z

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:32510
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-
--- PASS: TestFunctional/parallel/ServiceCmdConnect (11.66s)
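The endpoint checked above is an ordinary NodePort URL, so the same probe works from any Go program. A hedged sketch (the 32510 port is ephemeral and will differ between runs; this is not the suite's retry logic):

// NodePort probe sketch for the echoserver service (hypothetical client).
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 10 * time.Second}
	resp, err := client.Get("http://192.168.49.2:32510/") // URL resolved by `minikube service ... --url`
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	// The echoserver reflects hostname, request path and headers, as in the body above.
	fmt.Printf("status %d\n%s", resp.StatusCode, body)
}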

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.26s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-arm64 -p functional-892312 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-arm64 -p functional-892312 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.26s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (28.44s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [4f46933d-d4b6-4b6a-8dbc-54cc02d4b850] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004556949s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-892312 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-892312 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-892312 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-892312 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [e3b02d83-f6a0-4f25-9003-19457d2a3168] Pending
helpers_test.go:344: "sp-pod" [e3b02d83-f6a0-4f25-9003-19457d2a3168] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [e3b02d83-f6a0-4f25-9003-19457d2a3168] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.003729341s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-892312 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-892312 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-892312 delete -f testdata/storage-provisioner/pod.yaml: (1.353621529s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-892312 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [f90a9710-8202-4611-a650-128a751641e2] Pending
helpers_test.go:344: "sp-pod" [f90a9710-8202-4611-a650-128a751641e2] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [f90a9710-8202-4611-a650-128a751641e2] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003855098s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-892312 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (28.44s)
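The persistence assertion above is the interesting part: a file written through the claim must survive deletion and re-creation of the consuming pod. A hypothetical repro driving kubectl the same way the log does (assumes kubectl on PATH and the suite's testdata manifests; the readiness wait is omitted for brevity):

// PVC persistence repro sketch (hypothetical; mirrors the logged kubectl calls).
package main

import (
	"fmt"
	"os/exec"
)

// kubectl runs a command against the functional-892312 context and echoes output.
func kubectl(args ...string) {
	full := append([]string{"--context", "functional-892312"}, args...)
	out, err := exec.Command("kubectl", full...).CombinedOutput()
	fmt.Printf("$ kubectl %v\n%s", args, out)
	if err != nil {
		fmt.Println("error:", err)
	}
}

func main() {
	kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
	kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	// The suite waits for the new sp-pod to be Running before this final read.
	kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount")
}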

                                                
                                    
TestFunctional/parallel/SSHCmd (0.66s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-arm64 -p functional-892312 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-arm64 -p functional-892312 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.66s)

                                                
                                    
TestFunctional/parallel/CpCmd (2.45s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-892312 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-892312 ssh -n functional-892312 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-892312 cp functional-892312:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2843817258/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-892312 ssh -n functional-892312 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-892312 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-892312 ssh -n functional-892312 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.45s)

                                                
                                    
TestFunctional/parallel/FileSync (0.34s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/7728/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-arm64 -p functional-892312 ssh "sudo cat /etc/test/nested/copy/7728/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.34s)

                                                
                                    
TestFunctional/parallel/CertSync (2.17s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/7728.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-892312 ssh "sudo cat /etc/ssl/certs/7728.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/7728.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-892312 ssh "sudo cat /usr/share/ca-certificates/7728.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-892312 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/77282.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-892312 ssh "sudo cat /etc/ssl/certs/77282.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/77282.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-892312 ssh "sudo cat /usr/share/ca-certificates/77282.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-892312 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.17s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.08s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-892312 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.26s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-892312 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-892312 ssh "sudo systemctl is-active crio": exit status 1 (263.338625ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.26s)

                                                
                                    
TestFunctional/parallel/License (0.28s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.28s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.66s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-892312 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-892312 tunnel --alsologtostderr]
E0422 17:05:26.075768    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/addons-613799/client.crt: no such file or directory
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-892312 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-892312 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 41352: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.66s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-892312 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.39s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-892312 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [4220c392-1190-404f-ac07-f0a6462c0d89] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [4220c392-1190-404f-ac07-f0a6462c0d89] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.004780433s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.39s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-892312 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.103.120.99 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-892312 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (7.31s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1433: (dbg) Run:  kubectl --context functional-892312 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-892312 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-65f5d5cc78-tm2fq" [fdf69580-848f-4491-b6f9-814c54d93d97] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-65f5d5cc78-tm2fq" [fdf69580-848f-4491-b6f9-814c54d93d97] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.008965032s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.31s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.6s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-arm64 -p functional-892312 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.60s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.48s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.48s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.66s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-arm64 -p functional-892312 service list -o json
functional_test.go:1490: Took "661.192827ms" to run "out/minikube-linux-arm64 -p functional-892312 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.66s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.49s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1311: Took "393.909513ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1325: Took "91.155388ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.49s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.47s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1362: Took "400.263902ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1375: Took "72.347944ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.47s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.51s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-arm64 -p functional-892312 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.49.2:31971
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.51s)
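The same flags work for any exposed service; a sketch of reproducing this lookup by hand (service name and endpoint are taken from this run):

  $ out/minikube-linux-arm64 -p functional-892312 service --namespace=default --https --url hello-node
  https://192.168.49.2:31971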

TestFunctional/parallel/MountCmd/any-port (8.64s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-892312 /tmp/TestFunctionalparallelMountCmdany-port3999446854/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1713805557045359005" to /tmp/TestFunctionalparallelMountCmdany-port3999446854/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1713805557045359005" to /tmp/TestFunctionalparallelMountCmdany-port3999446854/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1713805557045359005" to /tmp/TestFunctionalparallelMountCmdany-port3999446854/001/test-1713805557045359005
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-892312 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-892312 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (544.131997ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-892312 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-892312 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Apr 22 17:05 created-by-test
-rw-r--r-- 1 docker docker 24 Apr 22 17:05 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Apr 22 17:05 test-1713805557045359005
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-892312 ssh cat /mount-9p/test-1713805557045359005
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-892312 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [317f6b37-0f37-433d-845a-50aa45a41a47] Pending
helpers_test.go:344: "busybox-mount" [317f6b37-0f37-433d-845a-50aa45a41a47] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [317f6b37-0f37-433d-845a-50aa45a41a47] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [317f6b37-0f37-433d-845a-50aa45a41a47] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003811487s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-892312 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-892312 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-892312 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-892312 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-892312 /tmp/TestFunctionalparallelMountCmdany-port3999446854/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.64s)
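The flow above reduces to: start the 9p mount as a background process, verify it from inside the node, use it, then force-unmount. A hand-run sketch (the host path is arbitrary; /mount-9p mirrors this run):

  $ out/minikube-linux-arm64 mount -p functional-892312 /tmp/somedir:/mount-9p &
  $ out/minikube-linux-arm64 -p functional-892312 ssh "findmnt -T /mount-9p | grep 9p"
  $ out/minikube-linux-arm64 -p functional-892312 ssh "sudo umount -f /mount-9p"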

TestFunctional/parallel/ServiceCmd/Format (0.54s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-arm64 -p functional-892312 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.54s)

TestFunctional/parallel/ServiceCmd/URL (0.39s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-arm64 -p functional-892312 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.49.2:31971
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.39s)

TestFunctional/parallel/MountCmd/specific-port (2.09s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-892312 /tmp/TestFunctionalparallelMountCmdspecific-port4054427625/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-892312 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-892312 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (497.538852ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-892312 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-892312 ssh -- ls -la /mount-9p
E0422 17:06:07.036991    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/addons-613799/client.crt: no such file or directory
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-892312 /tmp/TestFunctionalparallelMountCmdspecific-port4054427625/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-892312 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-892312 ssh "sudo umount -f /mount-9p": exit status 1 (328.313162ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-892312 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-892312 /tmp/TestFunctionalparallelMountCmdspecific-port4054427625/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.09s)
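The only delta from any-port is pinning the host-side 9p server port, useful when only known ports are reachable. Sketch (46464 matches this run):

  $ out/minikube-linux-arm64 mount -p functional-892312 /tmp/somedir:/mount-9p --port 46464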

TestFunctional/parallel/MountCmd/VerifyCleanup (2.69s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-892312 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1062992858/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-892312 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1062992858/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-892312 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1062992858/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-892312 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-892312 ssh "findmnt -T" /mount1: exit status 1 (878.532568ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-892312 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-892312 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-892312 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-892312 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-892312 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1062992858/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-892312 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1062992858/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-892312 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1062992858/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.69s)
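Cleanup here relies on mount --kill, which, judging by the three dead parent processes above, tears down every mount process for the profile at once. Sketch:

  $ out/minikube-linux-arm64 mount -p functional-892312 --kill=true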

TestFunctional/parallel/Version/short (0.1s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-arm64 -p functional-892312 version --short
--- PASS: TestFunctional/parallel/Version/short (0.10s)

TestFunctional/parallel/Version/components (0.96s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-arm64 -p functional-892312 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.96s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-892312 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-892312 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.0
registry.k8s.io/kube-proxy:v1.30.0
registry.k8s.io/kube-controller-manager:v1.30.0
registry.k8s.io/kube-apiserver:v1.30.0
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-892312
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-892312
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-892312 image ls --format short --alsologtostderr:
I0422 17:06:29.508958   46913 out.go:291] Setting OutFile to fd 1 ...
I0422 17:06:29.509149   46913 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0422 17:06:29.509160   46913 out.go:304] Setting ErrFile to fd 2...
I0422 17:06:29.509165   46913 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0422 17:06:29.509543   46913 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18706-2371/.minikube/bin
I0422 17:06:29.510319   46913 config.go:182] Loaded profile config "functional-892312": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0422 17:06:29.510453   46913 config.go:182] Loaded profile config "functional-892312": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0422 17:06:29.510911   46913 cli_runner.go:164] Run: docker container inspect functional-892312 --format={{.State.Status}}
I0422 17:06:29.529013   46913 ssh_runner.go:195] Run: systemctl --version
I0422 17:06:29.529073   46913 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-892312
I0422 17:06:29.548340   46913 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/18706-2371/.minikube/machines/functional-892312/id_rsa Username:docker}
I0422 17:06:29.637862   46913 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)
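The four ImageList* subtests run the same command with different --format values; per the stderr trace, each shells into the node and runs docker images there. Sketch of all four forms:

  $ out/minikube-linux-arm64 -p functional-892312 image ls --format short
  $ out/minikube-linux-arm64 -p functional-892312 image ls --format table
  $ out/minikube-linux-arm64 -p functional-892312 image ls --format json
  $ out/minikube-linux-arm64 -p functional-892312 image ls --format yaml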

TestFunctional/parallel/ImageCommands/ImageListTable (0.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-892312 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-892312 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/kube-scheduler              | v1.30.0           | 547adae34140b | 60.5MB |
| registry.k8s.io/coredns/coredns             | v1.11.1           | 2437cf7621777 | 57.4MB |
| docker.io/kubernetesui/dashboard            | <none>            | 20b332c9a70d8 | 244MB  |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| registry.k8s.io/kube-apiserver              | v1.30.0           | 181f57fd3cdb7 | 112MB  |
| docker.io/library/nginx                     | latest            | a6ac09e4d8a90 | 193MB  |
| docker.io/kubernetesui/metrics-scraper      | <none>            | a422e0e982356 | 42.3MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| registry.k8s.io/kube-proxy                  | v1.30.0           | cb7eac0b42cc1 | 87.9MB |
| registry.k8s.io/etcd                        | 3.5.12-0          | 014faa467e297 | 139MB  |
| registry.k8s.io/pause                       | 3.9               | 829e9de338bd5 | 514kB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| gcr.io/google-containers/addon-resizer      | functional-892312 | ffd4cfbbe753e | 32.9MB |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| docker.io/library/minikube-local-cache-test | functional-892312 | 62a668c19495f | 30B    |
| registry.k8s.io/kube-controller-manager     | v1.30.0           | 68feac521c0f1 | 107MB  |
| docker.io/library/nginx                     | alpine            | 8f49f2e379605 | 49.7MB |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-892312 image ls --format table --alsologtostderr:
I0422 17:06:30.030805   47043 out.go:291] Setting OutFile to fd 1 ...
I0422 17:06:30.031341   47043 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0422 17:06:30.031349   47043 out.go:304] Setting ErrFile to fd 2...
I0422 17:06:30.031354   47043 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0422 17:06:30.032047   47043 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18706-2371/.minikube/bin
I0422 17:06:30.033730   47043 config.go:182] Loaded profile config "functional-892312": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0422 17:06:30.033916   47043 config.go:182] Loaded profile config "functional-892312": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0422 17:06:30.034997   47043 cli_runner.go:164] Run: docker container inspect functional-892312 --format={{.State.Status}}
I0422 17:06:30.080174   47043 ssh_runner.go:195] Run: systemctl --version
I0422 17:06:30.080255   47043 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-892312
I0422 17:06:30.123246   47043 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/18706-2371/.minikube/machines/functional-892312/id_rsa Username:docker}
I0422 17:06:30.226755   47043 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.33s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-892312 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-892312 image ls --format json --alsologtostderr:
[{"id":"181f57fd3cdb796d3b94d5a1c86bf48ec261d75965d1b7c328f1d7c11f79f0bb","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.0"],"size":"112000000"},{"id":"cb7eac0b42cc1efe8ef8d69652c7c0babbf9ab418daca7fe90ddb8b1ab68389f","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.30.0"],"size":"87900000"},{"id":"547adae34140be47cdc0d9f3282b6184ef76154c44cf43fc7edd0685e61ab73a","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.0"],"size":"60500000"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"42300000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"014faa467e29798aeef733fe6d1
a3b5e382688217b053ad23410e6cccd5d22fd","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"139000000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"57400000"},{"id":"829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"514000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"62a668c19495f3878462a14f98a2bfe30748ffd9476b0edda233d248b5d85230","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-892312"],"size":"30"},{"id":"68feac521c0f104bef927614ce0960d6fcddf98bd42f039c98b7d4a82294d6f1","repoDigests":[],"repoTags":["registry.k8s
.io/kube-controller-manager:v1.30.0"],"size":"107000000"},{"id":"8f49f2e3796058c0b6568d610301043df2a2e84c72822ed0e2efdbcc4b653edc","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"49700000"},{"id":"a6ac09e4d8a90af2fac86bcd7508777bee5261c602b5ad90b5869925a021ad12","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"193000000"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"244000000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-892312"],"size":"32900000"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-ar
m:1.8"],"size":"85000000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-892312 image ls --format json --alsologtostderr:
I0422 17:06:29.751107   46972 out.go:291] Setting OutFile to fd 1 ...
I0422 17:06:29.751255   46972 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0422 17:06:29.751278   46972 out.go:304] Setting ErrFile to fd 2...
I0422 17:06:29.751290   46972 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0422 17:06:29.751565   46972 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18706-2371/.minikube/bin
I0422 17:06:29.752205   46972 config.go:182] Loaded profile config "functional-892312": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0422 17:06:29.752371   46972 config.go:182] Loaded profile config "functional-892312": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0422 17:06:29.752932   46972 cli_runner.go:164] Run: docker container inspect functional-892312 --format={{.State.Status}}
I0422 17:06:29.769335   46972 ssh_runner.go:195] Run: systemctl --version
I0422 17:06:29.769417   46972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-892312
I0422 17:06:29.800581   46972 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/18706-2371/.minikube/machines/functional-892312/id_rsa Username:docker}
I0422 17:06:29.893334   46972 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-892312 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-892312 image ls --format yaml --alsologtostderr:
- id: cb7eac0b42cc1efe8ef8d69652c7c0babbf9ab418daca7fe90ddb8b1ab68389f
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.30.0
size: "87900000"
- id: 014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "139000000"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "42300000"
- id: 68feac521c0f104bef927614ce0960d6fcddf98bd42f039c98b7d4a82294d6f1
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.0
size: "107000000"
- id: 8f49f2e3796058c0b6568d610301043df2a2e84c72822ed0e2efdbcc4b653edc
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "49700000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: 547adae34140be47cdc0d9f3282b6184ef76154c44cf43fc7edd0685e61ab73a
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.0
size: "60500000"
- id: a6ac09e4d8a90af2fac86bcd7508777bee5261c602b5ad90b5869925a021ad12
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "193000000"
- id: 2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "57400000"
- id: 829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "514000"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "244000000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-892312
size: "32900000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 62a668c19495f3878462a14f98a2bfe30748ffd9476b0edda233d248b5d85230
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-892312
size: "30"
- id: 181f57fd3cdb796d3b94d5a1c86bf48ec261d75965d1b7c328f1d7c11f79f0bb
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.0
size: "112000000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-892312 image ls --format yaml --alsologtostderr:
I0422 17:06:29.513185   46914 out.go:291] Setting OutFile to fd 1 ...
I0422 17:06:29.513410   46914 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0422 17:06:29.513437   46914 out.go:304] Setting ErrFile to fd 2...
I0422 17:06:29.513455   46914 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0422 17:06:29.513735   46914 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18706-2371/.minikube/bin
I0422 17:06:29.514446   46914 config.go:182] Loaded profile config "functional-892312": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0422 17:06:29.514615   46914 config.go:182] Loaded profile config "functional-892312": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0422 17:06:29.515117   46914 cli_runner.go:164] Run: docker container inspect functional-892312 --format={{.State.Status}}
I0422 17:06:29.532305   46914 ssh_runner.go:195] Run: systemctl --version
I0422 17:06:29.532351   46914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-892312
I0422 17:06:29.548632   46914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/18706-2371/.minikube/machines/functional-892312/id_rsa Username:docker}
I0422 17:06:29.638499   46914 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.8s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-arm64 -p functional-892312 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-892312 ssh pgrep buildkitd: exit status 1 (416.027553ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p functional-892312 image build -t localhost/my-image:functional-892312 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p functional-892312 image build -t localhost/my-image:functional-892312 testdata/build --alsologtostderr: (2.174525657s)
functional_test.go:322: (dbg) Stderr: out/minikube-linux-arm64 -p functional-892312 image build -t localhost/my-image:functional-892312 testdata/build --alsologtostderr:
I0422 17:06:30.171953   47059 out.go:291] Setting OutFile to fd 1 ...
I0422 17:06:30.172202   47059 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0422 17:06:30.172209   47059 out.go:304] Setting ErrFile to fd 2...
I0422 17:06:30.172215   47059 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0422 17:06:30.173613   47059 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18706-2371/.minikube/bin
I0422 17:06:30.174357   47059 config.go:182] Loaded profile config "functional-892312": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0422 17:06:30.175156   47059 config.go:182] Loaded profile config "functional-892312": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0422 17:06:30.175730   47059 cli_runner.go:164] Run: docker container inspect functional-892312 --format={{.State.Status}}
I0422 17:06:30.194535   47059 ssh_runner.go:195] Run: systemctl --version
I0422 17:06:30.194588   47059 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-892312
I0422 17:06:30.211325   47059 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/18706-2371/.minikube/machines/functional-892312/id_rsa Username:docker}
I0422 17:06:30.305056   47059 build_images.go:161] Building image from path: /tmp/build.4148120595.tar
I0422 17:06:30.305132   47059 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0422 17:06:30.315453   47059 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.4148120595.tar
I0422 17:06:30.319174   47059 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.4148120595.tar: stat -c "%s %y" /var/lib/minikube/build/build.4148120595.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.4148120595.tar': No such file or directory
I0422 17:06:30.319204   47059 ssh_runner.go:362] scp /tmp/build.4148120595.tar --> /var/lib/minikube/build/build.4148120595.tar (3072 bytes)
I0422 17:06:30.346760   47059 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.4148120595
I0422 17:06:30.356054   47059 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.4148120595 -xf /var/lib/minikube/build/build.4148120595.tar
I0422 17:06:30.365757   47059 docker.go:360] Building image: /var/lib/minikube/build/build.4148120595
I0422 17:06:30.365861   47059 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-892312 /var/lib/minikube/build/build.4148120595
#0 building with "default" instance using docker driver
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.7s
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9 527B / 527B done
#5 sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02 1.47kB / 1.47kB done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.1s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.2s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.0s done
#5 DONE 0.4s
#6 [2/3] RUN true
#6 DONE 0.3s
#7 [3/3] ADD content.txt /
#7 DONE 0.0s
#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:831438b9cb0e57696a23715affdea4e598eb363b799e06fa1aecb94f60e3fddc done
#8 naming to localhost/my-image:functional-892312 done
#8 DONE 0.1s
I0422 17:06:32.230107   47059 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-892312 /var/lib/minikube/build/build.4148120595: (1.864218333s)
I0422 17:06:32.230171   47059 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.4148120595
I0422 17:06:32.239875   47059 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.4148120595.tar
I0422 17:06:32.248883   47059 build_images.go:217] Built localhost/my-image:functional-892312 from /tmp/build.4148120595.tar
I0422 17:06:32.248913   47059 build_images.go:133] succeeded building to: functional-892312
I0422 17:06:32.248919   47059 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-892312 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.80s)
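Per the trace, image build tars the context on the host, copies it to /var/lib/minikube/build inside the node, untars it, and runs docker build there. The user-facing step alone (testdata/build is this run's context directory):

  $ out/minikube-linux-arm64 -p functional-892312 image build -t localhost/my-image:functional-892312 testdata/build
  $ out/minikube-linux-arm64 -p functional-892312 image ls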

TestFunctional/parallel/ImageCommands/Setup (1.68s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
2024/04/22 17:06:11 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.655970957s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-892312
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.68s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.23s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-892312 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.23s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.22s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-892312 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.22s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-892312 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)

TestFunctional/parallel/DockerEnv/bash (1.3s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-arm64 -p functional-892312 docker-env) && out/minikube-linux-arm64 status -p functional-892312"
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-arm64 -p functional-892312 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.30s)
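docker-env prints export statements (DOCKER_HOST and related variables), so eval'ing it points the host docker CLI at the daemon inside the node; that is why the docker images call here lists the cluster's images. Sketch:

  $ eval $(out/minikube-linux-arm64 -p functional-892312 docker-env)
  $ docker images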

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.36s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-arm64 -p functional-892312 image load --daemon gcr.io/google-containers/addon-resizer:functional-892312 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-arm64 -p functional-892312 image load --daemon gcr.io/google-containers/addon-resizer:functional-892312 --alsologtostderr: (4.114001084s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-892312 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.36s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.8s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-arm64 -p functional-892312 image load --daemon gcr.io/google-containers/addon-resizer:functional-892312 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-arm64 -p functional-892312 image load --daemon gcr.io/google-containers/addon-resizer:functional-892312 --alsologtostderr: (2.591851193s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-892312 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.80s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.87s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.295204381s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-892312
functional_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p functional-892312 image load --daemon gcr.io/google-containers/addon-resizer:functional-892312 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-arm64 -p functional-892312 image load --daemon gcr.io/google-containers/addon-resizer:functional-892312 --alsologtostderr: (3.346430174s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-892312 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.87s)
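All the load --daemon variants follow one pattern: get the image into the host docker daemon (pull and/or tag), push it into the cluster runtime, then confirm with image ls. Sketch of the tag-and-load case exercised above:

  $ docker pull gcr.io/google-containers/addon-resizer:1.8.9
  $ docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-892312
  $ out/minikube-linux-arm64 -p functional-892312 image load --daemon gcr.io/google-containers/addon-resizer:functional-892312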

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.85s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-arm64 -p functional-892312 image save gcr.io/google-containers/addon-resizer:functional-892312 /home/jenkins/workspace/Docker_Linux_docker_arm64/addon-resizer-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.85s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.47s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-arm64 -p functional-892312 image rm gcr.io/google-containers/addon-resizer:functional-892312 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-892312 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.47s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-arm64 -p functional-892312 image load /home/jenkins/workspace/Docker_Linux_docker_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-arm64 -p functional-892312 image load /home/jenkins/workspace/Docker_Linux_docker_arm64/addon-resizer-save.tar --alsologtostderr: (1.04304227s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-892312 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.26s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-892312
functional_test.go:423: (dbg) Run:  out/minikube-linux-arm64 -p functional-892312 image save --daemon gcr.io/google-containers/addon-resizer:functional-892312 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-arm64 -p functional-892312 image save --daemon gcr.io/google-containers/addon-resizer:functional-892312 --alsologtostderr: (1.010437905s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-892312
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.04s)
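Taken together, the save/load tests cover both directions: image save exports to a tarball (path argument) or back to the host daemon (--daemon), and image load accepts either source. Combined round-trip sketch:

  $ out/minikube-linux-arm64 -p functional-892312 image save gcr.io/google-containers/addon-resizer:functional-892312 ./addon-resizer-save.tar
  $ out/minikube-linux-arm64 -p functional-892312 image load ./addon-resizer-save.tar
  $ out/minikube-linux-arm64 -p functional-892312 image save --daemon gcr.io/google-containers/addon-resizer:functional-892312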

TestFunctional/delete_addon-resizer_images (0.09s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-892312
--- PASS: TestFunctional/delete_addon-resizer_images (0.09s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-892312
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-892312
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (128.87s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-593417 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=docker
E0422 17:07:28.957969    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/addons-613799/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-593417 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=docker: (2m8.073997159s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-593417 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (128.87s)
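The --ha flag provisions a multi-control-plane cluster (multiple control-plane nodes, plus a worker added in a later subtest). The invocation under test, minus the test-only verbosity flags (sketch):

  $ out/minikube-linux-arm64 start -p ha-593417 --wait=true --memory=2200 --ha --driver=docker --container-runtime=docker
  $ out/minikube-linux-arm64 -p ha-593417 status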

TestMultiControlPlane/serial/DeployApp (57.74s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-593417 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-593417 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-593417 -- rollout status deployment/busybox: (3.917798569s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-593417 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-593417 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-593417 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-593417 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-593417 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-593417 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-593417 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-593417 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-593417 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-593417 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-593417 -- exec busybox-fc5497c4f-kpts2 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-593417 -- exec busybox-fc5497c4f-th29d -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-593417 -- exec busybox-fc5497c4f-xwc4b -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-593417 -- exec busybox-fc5497c4f-kpts2 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-593417 -- exec busybox-fc5497c4f-th29d -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-593417 -- exec busybox-fc5497c4f-xwc4b -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-593417 -- exec busybox-fc5497c4f-kpts2 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-593417 -- exec busybox-fc5497c4f-th29d -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-593417 -- exec busybox-fc5497c4f-xwc4b -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (57.74s)
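The repeated "expected 3 Pod IPs but got 2 (may be temporary)" lines are the test's retry loop waiting for the busybox replicas to spread across nodes and report IPs; the same probe by hand:

  $ kubectl --context ha-593417 get pods -o jsonpath='{.items[*].status.podIP}'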

TestMultiControlPlane/serial/PingHostFromPods (1.76s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-593417 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-593417 -- exec busybox-fc5497c4f-kpts2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-593417 -- exec busybox-fc5497c4f-kpts2 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-593417 -- exec busybox-fc5497c4f-th29d -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-593417 -- exec busybox-fc5497c4f-th29d -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-593417 -- exec busybox-fc5497c4f-xwc4b -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-593417 -- exec busybox-fc5497c4f-xwc4b -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.76s)
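
The sh pipeline logged above resolves host.minikube.internal inside each pod, keeps line 5 of the busybox nslookup output, and cuts out the third field to get the host gateway address (192.168.49.1 in this run), which is then pinged from the pod. A rough Go equivalent, reusing one of this run's pod names (the line-5 offset is the test's own assumption about busybox nslookup layout):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same extraction the test runs inside the pod.
		resolve := `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3`
		out, err := exec.Command("kubectl", "--context", "ha-593417", "exec",
			"busybox-fc5497c4f-kpts2", "--", "sh", "-c", resolve).Output()
		if err != nil {
			fmt.Println("resolve failed:", err)
			return
		}
		hostIP := strings.TrimSpace(string(out))

		// One ICMP probe from the pod back to the host.
		if err := exec.Command("kubectl", "--context", "ha-593417", "exec",
			"busybox-fc5497c4f-kpts2", "--", "sh", "-c", "ping -c 1 "+hostIP).Run(); err != nil {
			fmt.Println("ping failed:", err)
			return
		}
		fmt.Println("host", hostIP, "reachable from pod")
	}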

TestMultiControlPlane/serial/AddWorkerNode (25.58s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-593417 -v=7 --alsologtostderr
E0422 17:09:45.113421    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/addons-613799/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-593417 -v=7 --alsologtostderr: (24.565117803s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-593417 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-593417 status -v=7 --alsologtostderr: (1.012153326s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (25.58s)

TestMultiControlPlane/serial/NodeLabels (0.13s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-593417 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.13s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.78s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.78s)

TestMultiControlPlane/serial/CopyFile (19.46s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-arm64 -p ha-593417 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-593417 cp testdata/cp-test.txt ha-593417:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-593417 ssh -n ha-593417 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-593417 cp ha-593417:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2232790980/001/cp-test_ha-593417.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-593417 ssh -n ha-593417 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-593417 cp ha-593417:/home/docker/cp-test.txt ha-593417-m02:/home/docker/cp-test_ha-593417_ha-593417-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-593417 ssh -n ha-593417 "sudo cat /home/docker/cp-test.txt"
E0422 17:10:12.798394    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/addons-613799/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-593417 ssh -n ha-593417-m02 "sudo cat /home/docker/cp-test_ha-593417_ha-593417-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-593417 cp ha-593417:/home/docker/cp-test.txt ha-593417-m03:/home/docker/cp-test_ha-593417_ha-593417-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-593417 ssh -n ha-593417 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-593417 ssh -n ha-593417-m03 "sudo cat /home/docker/cp-test_ha-593417_ha-593417-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-593417 cp ha-593417:/home/docker/cp-test.txt ha-593417-m04:/home/docker/cp-test_ha-593417_ha-593417-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-593417 ssh -n ha-593417 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-593417 ssh -n ha-593417-m04 "sudo cat /home/docker/cp-test_ha-593417_ha-593417-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-593417 cp testdata/cp-test.txt ha-593417-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-593417 ssh -n ha-593417-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-593417 cp ha-593417-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2232790980/001/cp-test_ha-593417-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-593417 ssh -n ha-593417-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-593417 cp ha-593417-m02:/home/docker/cp-test.txt ha-593417:/home/docker/cp-test_ha-593417-m02_ha-593417.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-593417 ssh -n ha-593417-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-593417 ssh -n ha-593417 "sudo cat /home/docker/cp-test_ha-593417-m02_ha-593417.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-593417 cp ha-593417-m02:/home/docker/cp-test.txt ha-593417-m03:/home/docker/cp-test_ha-593417-m02_ha-593417-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-593417 ssh -n ha-593417-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-593417 ssh -n ha-593417-m03 "sudo cat /home/docker/cp-test_ha-593417-m02_ha-593417-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-593417 cp ha-593417-m02:/home/docker/cp-test.txt ha-593417-m04:/home/docker/cp-test_ha-593417-m02_ha-593417-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-593417 ssh -n ha-593417-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-593417 ssh -n ha-593417-m04 "sudo cat /home/docker/cp-test_ha-593417-m02_ha-593417-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-593417 cp testdata/cp-test.txt ha-593417-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-593417 ssh -n ha-593417-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-593417 cp ha-593417-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2232790980/001/cp-test_ha-593417-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-593417 ssh -n ha-593417-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-593417 cp ha-593417-m03:/home/docker/cp-test.txt ha-593417:/home/docker/cp-test_ha-593417-m03_ha-593417.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-593417 ssh -n ha-593417-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-593417 ssh -n ha-593417 "sudo cat /home/docker/cp-test_ha-593417-m03_ha-593417.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-593417 cp ha-593417-m03:/home/docker/cp-test.txt ha-593417-m02:/home/docker/cp-test_ha-593417-m03_ha-593417-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-593417 ssh -n ha-593417-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-593417 ssh -n ha-593417-m02 "sudo cat /home/docker/cp-test_ha-593417-m03_ha-593417-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-593417 cp ha-593417-m03:/home/docker/cp-test.txt ha-593417-m04:/home/docker/cp-test_ha-593417-m03_ha-593417-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-593417 ssh -n ha-593417-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-593417 ssh -n ha-593417-m04 "sudo cat /home/docker/cp-test_ha-593417-m03_ha-593417-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-593417 cp testdata/cp-test.txt ha-593417-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-593417 ssh -n ha-593417-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-593417 cp ha-593417-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2232790980/001/cp-test_ha-593417-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-593417 ssh -n ha-593417-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-593417 cp ha-593417-m04:/home/docker/cp-test.txt ha-593417:/home/docker/cp-test_ha-593417-m04_ha-593417.txt
E0422 17:10:26.633422    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/functional-892312/client.crt: no such file or directory
E0422 17:10:26.638685    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/functional-892312/client.crt: no such file or directory
E0422 17:10:26.648927    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/functional-892312/client.crt: no such file or directory
E0422 17:10:26.669304    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/functional-892312/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-593417 ssh -n ha-593417-m04 "sudo cat /home/docker/cp-test.txt"
E0422 17:10:26.709735    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/functional-892312/client.crt: no such file or directory
E0422 17:10:26.789976    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/functional-892312/client.crt: no such file or directory
E0422 17:10:26.950449    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/functional-892312/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-593417 ssh -n ha-593417 "sudo cat /home/docker/cp-test_ha-593417-m04_ha-593417.txt"
E0422 17:10:27.271208    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/functional-892312/client.crt: no such file or directory
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-593417 cp ha-593417-m04:/home/docker/cp-test.txt ha-593417-m02:/home/docker/cp-test_ha-593417-m04_ha-593417-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-593417 ssh -n ha-593417-m04 "sudo cat /home/docker/cp-test.txt"
E0422 17:10:27.911857    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/functional-892312/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-593417 ssh -n ha-593417-m02 "sudo cat /home/docker/cp-test_ha-593417-m04_ha-593417-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-593417 cp ha-593417-m04:/home/docker/cp-test.txt ha-593417-m03:/home/docker/cp-test_ha-593417-m04_ha-593417-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-593417 ssh -n ha-593417-m04 "sudo cat /home/docker/cp-test.txt"
E0422 17:10:29.192708    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/functional-892312/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-593417 ssh -n ha-593417-m03 "sudo cat /home/docker/cp-test_ha-593417-m04_ha-593417-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.46s)
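
The block above walks every direction of minikube cp — host to node, node back to host, and node to node across all four machines — verifying each copy by cat'ing the destination file over ssh. One host-to-node round trip, sketched in Go with this run's binary path and profile:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// run executes one minikube command against the ha-593417 profile and
	// returns its combined output; the binary path matches this run's logs.
	func run(args ...string) (string, error) {
		out, err := exec.Command("out/minikube-linux-arm64",
			append([]string{"-p", "ha-593417"}, args...)...).CombinedOutput()
		return string(out), err
	}

	func main() {
		// Host -> node, then read the file back over ssh to verify the copy.
		if _, err := run("cp", "testdata/cp-test.txt", "ha-593417:/home/docker/cp-test.txt"); err != nil {
			fmt.Println("cp failed:", err)
			return
		}
		got, err := run("ssh", "-n", "ha-593417", "sudo cat /home/docker/cp-test.txt")
		if err != nil {
			fmt.Println("ssh failed:", err)
			return
		}
		fmt.Println("copied contents:", got)
	}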

TestMultiControlPlane/serial/StopSecondaryNode (11.68s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-arm64 -p ha-593417 node stop m02 -v=7 --alsologtostderr
E0422 17:10:31.753019    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/functional-892312/client.crt: no such file or directory
E0422 17:10:36.873555    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/functional-892312/client.crt: no such file or directory
ha_test.go:363: (dbg) Done: out/minikube-linux-arm64 -p ha-593417 node stop m02 -v=7 --alsologtostderr: (10.982418513s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-arm64 -p ha-593417 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-593417 status -v=7 --alsologtostderr: exit status 7 (700.008877ms)

-- stdout --
	ha-593417
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-593417-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-593417-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-593417-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0422 17:10:40.577119   68045 out.go:291] Setting OutFile to fd 1 ...
	I0422 17:10:40.577310   68045 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 17:10:40.577336   68045 out.go:304] Setting ErrFile to fd 2...
	I0422 17:10:40.577361   68045 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 17:10:40.577742   68045 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18706-2371/.minikube/bin
	I0422 17:10:40.578066   68045 out.go:298] Setting JSON to false
	I0422 17:10:40.578127   68045 mustload.go:65] Loading cluster: ha-593417
	I0422 17:10:40.578913   68045 config.go:182] Loaded profile config "ha-593417": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0422 17:10:40.578961   68045 status.go:255] checking status of ha-593417 ...
	I0422 17:10:40.579136   68045 notify.go:220] Checking for updates...
	I0422 17:10:40.580911   68045 cli_runner.go:164] Run: docker container inspect ha-593417 --format={{.State.Status}}
	I0422 17:10:40.599446   68045 status.go:330] ha-593417 host status = "Running" (err=<nil>)
	I0422 17:10:40.599481   68045 host.go:66] Checking if "ha-593417" exists ...
	I0422 17:10:40.599882   68045 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-593417
	I0422 17:10:40.617412   68045 host.go:66] Checking if "ha-593417" exists ...
	I0422 17:10:40.617733   68045 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0422 17:10:40.617789   68045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-593417
	I0422 17:10:40.639499   68045 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/18706-2371/.minikube/machines/ha-593417/id_rsa Username:docker}
	I0422 17:10:40.730334   68045 ssh_runner.go:195] Run: systemctl --version
	I0422 17:10:40.734655   68045 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 17:10:40.746868   68045 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0422 17:10:40.801312   68045 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:72 SystemTime:2024-04-22 17:10:40.787742138 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1058-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215101440 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:26.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1]] Warnings:<nil>}}
	I0422 17:10:40.802059   68045 kubeconfig.go:125] found "ha-593417" server: "https://192.168.49.254:8443"
	I0422 17:10:40.802094   68045 api_server.go:166] Checking apiserver status ...
	I0422 17:10:40.802141   68045 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 17:10:40.813719   68045 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2160/cgroup
	I0422 17:10:40.823942   68045 api_server.go:182] apiserver freezer: "2:freezer:/docker/0071b7196a0d63586414ce01b53700b17a97b1446362d2f75ac783437780fa51/kubepods/burstable/pod5a27191c9b1ec3f6b0aaf1070e4f55ca/edde9824e94cde95d1527f79af968ca6cd555d96da819482f379c8ef9568e7c1"
	I0422 17:10:40.824097   68045 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/0071b7196a0d63586414ce01b53700b17a97b1446362d2f75ac783437780fa51/kubepods/burstable/pod5a27191c9b1ec3f6b0aaf1070e4f55ca/edde9824e94cde95d1527f79af968ca6cd555d96da819482f379c8ef9568e7c1/freezer.state
	I0422 17:10:40.833448   68045 api_server.go:204] freezer state: "THAWED"
	I0422 17:10:40.833484   68045 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0422 17:10:40.841448   68045 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0422 17:10:40.841479   68045 status.go:422] ha-593417 apiserver status = Running (err=<nil>)
	I0422 17:10:40.841491   68045 status.go:257] ha-593417 status: &{Name:ha-593417 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0422 17:10:40.841510   68045 status.go:255] checking status of ha-593417-m02 ...
	I0422 17:10:40.841847   68045 cli_runner.go:164] Run: docker container inspect ha-593417-m02 --format={{.State.Status}}
	I0422 17:10:40.857471   68045 status.go:330] ha-593417-m02 host status = "Stopped" (err=<nil>)
	I0422 17:10:40.857493   68045 status.go:343] host is not running, skipping remaining checks
	I0422 17:10:40.857500   68045 status.go:257] ha-593417-m02 status: &{Name:ha-593417-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0422 17:10:40.857521   68045 status.go:255] checking status of ha-593417-m03 ...
	I0422 17:10:40.857838   68045 cli_runner.go:164] Run: docker container inspect ha-593417-m03 --format={{.State.Status}}
	I0422 17:10:40.875758   68045 status.go:330] ha-593417-m03 host status = "Running" (err=<nil>)
	I0422 17:10:40.875786   68045 host.go:66] Checking if "ha-593417-m03" exists ...
	I0422 17:10:40.876094   68045 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-593417-m03
	I0422 17:10:40.896288   68045 host.go:66] Checking if "ha-593417-m03" exists ...
	I0422 17:10:40.896620   68045 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0422 17:10:40.896674   68045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-593417-m03
	I0422 17:10:40.913205   68045 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32797 SSHKeyPath:/home/jenkins/minikube-integration/18706-2371/.minikube/machines/ha-593417-m03/id_rsa Username:docker}
	I0422 17:10:41.005325   68045 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 17:10:41.018862   68045 kubeconfig.go:125] found "ha-593417" server: "https://192.168.49.254:8443"
	I0422 17:10:41.018892   68045 api_server.go:166] Checking apiserver status ...
	I0422 17:10:41.018939   68045 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 17:10:41.030285   68045 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2130/cgroup
	I0422 17:10:41.040053   68045 api_server.go:182] apiserver freezer: "2:freezer:/docker/b3e0d0eda15837cdda5f0a02dc0ee52819b553b5fbd7438b16e9b09b94213143/kubepods/burstable/pod4217c4d96693e244d2b76502bae96db9/d2bd94a9f810c0c9a3ebb5605a440e321d5244d4706d7fa2f1714ab424b2c87b"
	I0422 17:10:41.040126   68045 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/b3e0d0eda15837cdda5f0a02dc0ee52819b553b5fbd7438b16e9b09b94213143/kubepods/burstable/pod4217c4d96693e244d2b76502bae96db9/d2bd94a9f810c0c9a3ebb5605a440e321d5244d4706d7fa2f1714ab424b2c87b/freezer.state
	I0422 17:10:41.050287   68045 api_server.go:204] freezer state: "THAWED"
	I0422 17:10:41.050318   68045 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0422 17:10:41.058097   68045 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0422 17:10:41.058124   68045 status.go:422] ha-593417-m03 apiserver status = Running (err=<nil>)
	I0422 17:10:41.058134   68045 status.go:257] ha-593417-m03 status: &{Name:ha-593417-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0422 17:10:41.058151   68045 status.go:255] checking status of ha-593417-m04 ...
	I0422 17:10:41.058492   68045 cli_runner.go:164] Run: docker container inspect ha-593417-m04 --format={{.State.Status}}
	I0422 17:10:41.073616   68045 status.go:330] ha-593417-m04 host status = "Running" (err=<nil>)
	I0422 17:10:41.073643   68045 host.go:66] Checking if "ha-593417-m04" exists ...
	I0422 17:10:41.073942   68045 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-593417-m04
	I0422 17:10:41.089450   68045 host.go:66] Checking if "ha-593417-m04" exists ...
	I0422 17:10:41.089759   68045 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0422 17:10:41.089814   68045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-593417-m04
	I0422 17:10:41.106102   68045 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32802 SSHKeyPath:/home/jenkins/minikube-integration/18706-2371/.minikube/machines/ha-593417-m04/id_rsa Username:docker}
	I0422 17:10:41.194096   68045 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 17:10:41.206573   68045 status.go:257] ha-593417-m04 status: &{Name:ha-593417-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (11.68s)
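
The non-zero exit above is the point of the check: minikube status reports a stopped node through exit status 7, so the harness asserts on the exit code rather than treating it as a command failure. A sketch of reading that code from Go:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		// With m02 stopped, "minikube status" exits 7 rather than 0;
		// the exit code itself is the signal, not a failure of the command.
		cmd := exec.Command("out/minikube-linux-arm64", "-p", "ha-593417",
			"status", "-v=7", "--alsologtostderr")
		out, err := cmd.CombinedOutput()
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			fmt.Printf("status exited %d (expected 7 while a node is down)\n", exitErr.ExitCode())
		}
		fmt.Println(string(out))
	}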

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.58s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.58s)

TestMultiControlPlane/serial/RestartSecondaryNode (40s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-arm64 -p ha-593417 node start m02 -v=7 --alsologtostderr
E0422 17:10:47.114743    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/functional-892312/client.crt: no such file or directory
E0422 17:11:07.594880    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/functional-892312/client.crt: no such file or directory
ha_test.go:420: (dbg) Done: out/minikube-linux-arm64 -p ha-593417 node start m02 -v=7 --alsologtostderr: (38.392807864s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p ha-593417 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-linux-arm64 -p ha-593417 status -v=7 --alsologtostderr: (1.487681494s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (40.00s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (4.99s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (4.989080201s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (4.99s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (241.43s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-593417 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-593417 -v=7 --alsologtostderr
E0422 17:11:48.555358    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/functional-892312/client.crt: no such file or directory
ha_test.go:462: (dbg) Done: out/minikube-linux-arm64 stop -p ha-593417 -v=7 --alsologtostderr: (34.076967779s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-arm64 start -p ha-593417 --wait=true -v=7 --alsologtostderr
E0422 17:13:10.476499    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/functional-892312/client.crt: no such file or directory
E0422 17:14:45.112049    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/addons-613799/client.crt: no such file or directory
E0422 17:15:26.633533    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/functional-892312/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-arm64 start -p ha-593417 --wait=true -v=7 --alsologtostderr: (3m27.169185449s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-593417
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (241.43s)

TestMultiControlPlane/serial/DeleteSecondaryNode (12.67s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-arm64 -p ha-593417 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-arm64 -p ha-593417 node delete m03 -v=7 --alsologtostderr: (11.718759405s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-arm64 -p ha-593417 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (12.67s)
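
The go-template in the final command renders just the Ready condition of each node, so a healthy cluster prints one "True" per remaining node. The same query issued from Go, assuming kubectl on PATH:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same go-template as the test: print the status of every node's
		// "Ready" condition, one per line.
		tmpl := `{{range .items}}{{range .status.conditions}}` +
			`{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
		out, err := exec.Command("kubectl", "get", "nodes", "-o", "go-template="+tmpl).Output()
		if err != nil {
			fmt.Println("kubectl failed:", err)
			return
		}
		for _, s := range strings.Fields(string(out)) {
			if s != "True" {
				fmt.Println("node not Ready:", s)
				return
			}
		}
		fmt.Println("all remaining nodes Ready")
	}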

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.54s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.54s)

TestMultiControlPlane/serial/StopCluster (32.95s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-arm64 -p ha-593417 stop -v=7 --alsologtostderr
E0422 17:15:54.316893    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/functional-892312/client.crt: no such file or directory
ha_test.go:531: (dbg) Done: out/minikube-linux-arm64 -p ha-593417 stop -v=7 --alsologtostderr: (32.832700672s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-arm64 -p ha-593417 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-593417 status -v=7 --alsologtostderr: exit status 7 (116.937241ms)

-- stdout --
	ha-593417
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-593417-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-593417-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0422 17:16:14.321402   93491 out.go:291] Setting OutFile to fd 1 ...
	I0422 17:16:14.321602   93491 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 17:16:14.321631   93491 out.go:304] Setting ErrFile to fd 2...
	I0422 17:16:14.321652   93491 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 17:16:14.321919   93491 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18706-2371/.minikube/bin
	I0422 17:16:14.322134   93491 out.go:298] Setting JSON to false
	I0422 17:16:14.322190   93491 mustload.go:65] Loading cluster: ha-593417
	I0422 17:16:14.322254   93491 notify.go:220] Checking for updates...
	I0422 17:16:14.322700   93491 config.go:182] Loaded profile config "ha-593417": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0422 17:16:14.322722   93491 status.go:255] checking status of ha-593417 ...
	I0422 17:16:14.323212   93491 cli_runner.go:164] Run: docker container inspect ha-593417 --format={{.State.Status}}
	I0422 17:16:14.340433   93491 status.go:330] ha-593417 host status = "Stopped" (err=<nil>)
	I0422 17:16:14.340456   93491 status.go:343] host is not running, skipping remaining checks
	I0422 17:16:14.340464   93491 status.go:257] ha-593417 status: &{Name:ha-593417 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0422 17:16:14.340488   93491 status.go:255] checking status of ha-593417-m02 ...
	I0422 17:16:14.340818   93491 cli_runner.go:164] Run: docker container inspect ha-593417-m02 --format={{.State.Status}}
	I0422 17:16:14.356590   93491 status.go:330] ha-593417-m02 host status = "Stopped" (err=<nil>)
	I0422 17:16:14.356614   93491 status.go:343] host is not running, skipping remaining checks
	I0422 17:16:14.356622   93491 status.go:257] ha-593417-m02 status: &{Name:ha-593417-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0422 17:16:14.356642   93491 status.go:255] checking status of ha-593417-m04 ...
	I0422 17:16:14.356999   93491 cli_runner.go:164] Run: docker container inspect ha-593417-m04 --format={{.State.Status}}
	I0422 17:16:14.371191   93491 status.go:330] ha-593417-m04 host status = "Stopped" (err=<nil>)
	I0422 17:16:14.371214   93491 status.go:343] host is not running, skipping remaining checks
	I0422 17:16:14.371222   93491 status.go:257] ha-593417-m04 status: &{Name:ha-593417-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (32.95s)

TestMultiControlPlane/serial/RestartCluster (93.45s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-arm64 start -p ha-593417 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=docker
ha_test.go:560: (dbg) Done: out/minikube-linux-arm64 start -p ha-593417 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=docker: (1m32.476307146s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-arm64 -p ha-593417 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (93.45s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.62s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.62s)

TestMultiControlPlane/serial/AddSecondaryNode (47.97s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-593417 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-arm64 node add -p ha-593417 --control-plane -v=7 --alsologtostderr: (46.888693686s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-arm64 -p ha-593417 status -v=7 --alsologtostderr
ha_test.go:611: (dbg) Done: out/minikube-linux-arm64 -p ha-593417 status -v=7 --alsologtostderr: (1.079139721s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (47.97s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.75s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.75s)

TestImageBuild/serial/Setup (31.67s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -p image-144755 --driver=docker  --container-runtime=docker
image_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -p image-144755 --driver=docker  --container-runtime=docker: (31.6730825s)
--- PASS: TestImageBuild/serial/Setup (31.67s)

TestImageBuild/serial/NormalBuild (2.12s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-144755
image_test.go:78: (dbg) Done: out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-144755: (2.116568461s)
--- PASS: TestImageBuild/serial/NormalBuild (2.12s)

TestImageBuild/serial/BuildWithBuildArg (0.94s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-144755
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.94s)

TestImageBuild/serial/BuildWithDockerIgnore (1.02s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-144755
image_test.go:133: (dbg) Done: out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-144755: (1.019265671s)
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (1.02s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.75s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-144755
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.75s)
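
Taken together, the four builds above cover a plain context, --build-opt=build-arg plus no-cache, a context with a .dockerignore, and an alternate Dockerfile selected with -f. The build-arg variant, replayed from Go with exactly this run's flags:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Mirror of the BuildWithBuildArg invocation from this run: pass a
		// build argument and disable the cache through --build-opt.
		cmd := exec.Command("out/minikube-linux-arm64", "image", "build",
			"-t", "aaa:latest",
			"--build-opt=build-arg=ENV_A=test_env_str",
			"--build-opt=no-cache",
			"./testdata/image-build/test-arg", "-p", "image-144755")
		out, err := cmd.CombinedOutput()
		if err != nil {
			fmt.Println("build failed:", err)
		}
		fmt.Println(string(out))
	}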

TestJSONOutput/start/Command (48.69s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-117713 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker
E0422 17:19:45.109243    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/addons-613799/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-117713 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker: (48.686774383s)
--- PASS: TestJSONOutput/start/Command (48.69s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.61s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-117713 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.61s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.56s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-117713 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.56s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.71s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-117713 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-117713 --output=json --user=testUser: (5.712138927s)
--- PASS: TestJSONOutput/stop/Command (5.71s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.23s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-197368 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-197368 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (88.045307ms)

-- stdout --
	{"specversion":"1.0","id":"173de196-3524-4da2-be3d-f86eff9ac8ed","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-197368] minikube v1.33.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"02aa6123-d6f6-410c-8b6b-39f6d7ddd405","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18706"}}
	{"specversion":"1.0","id":"290d5264-3b52-4645-b180-d0f58551dc48","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"ed8b8734-397a-482e-be69-945846d597cf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18706-2371/kubeconfig"}}
	{"specversion":"1.0","id":"dfd5189c-5f46-4731-b3a8-a5aa5df982b6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18706-2371/.minikube"}}
	{"specversion":"1.0","id":"fa186145-1069-4e24-bbdc-c8b9eb2f2e72","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"e055baef-f7dc-460d-9558-0d76ea6f73ab","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"bf4bbfff-052b-43c4-b4e3-8406f05d056a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-197368" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-197368
--- PASS: TestErrorJSONOutput (0.23s)
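
Every line minikube emits under --output=json is a CloudEvents envelope; the final event above is the expected failure, carrying name DRV_UNSUPPORTED_OS and exitcode 56 in its data payload. A sketch that decodes one such line — the struct models only the fields visible in this output, and encoding/json silently ignores the rest:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// event covers the envelope fields seen in the stdout block above.
	type event struct {
		SpecVersion string            `json:"specversion"`
		Type        string            `json:"type"`
		Data        map[string]string `json:"data"`
	}

	func main() {
		line := `{"specversion":"1.0","type":"io.k8s.sigs.minikube.error",` +
			`"data":{"exitcode":"56","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS"}}`
		var e event
		if err := json.Unmarshal([]byte(line), &e); err != nil {
			fmt.Println("bad event:", err)
			return
		}
		fmt.Printf("%s: %s (exit %s)\n", e.Data["name"], e.Data["message"], e.Data["exitcode"])
	}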

TestKicCustomNetwork/create_custom_network (37.4s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-790656 --network=
E0422 17:20:26.633092    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/functional-892312/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-790656 --network=: (35.066454627s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-790656" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-790656
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-790656: (2.311171133s)
--- PASS: TestKicCustomNetwork/create_custom_network (37.40s)

TestKicCustomNetwork/use_default_bridge_network (35.15s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-603552 --network=bridge
E0422 17:21:08.159415    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/addons-613799/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-603552 --network=bridge: (33.128741511s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-603552" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-603552
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-603552: (1.992672021s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (35.15s)

TestKicExistingNetwork (34.99s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-681711 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-681711 --network=existing-network: (32.845013281s)
helpers_test.go:175: Cleaning up "existing-network-681711" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-681711
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-681711: (2.007139579s)
--- PASS: TestKicExistingNetwork (34.99s)

TestKicCustomSubnet (37.62s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-415366 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-415366 --subnet=192.168.60.0/24: (35.468965832s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-415366 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-415366" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-415366
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-415366: (2.139528251s)
--- PASS: TestKicCustomSubnet (37.62s)
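
The subnet assertion above shells out to "docker network inspect" with a Go template that indexes the first IPAM config entry. A minimal Go sketch of the same check, assuming the profile's network is still up; this is illustrative, not the suite's helper code:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same template the test passes at kic_custom_network_test.go:161:
		// index the first IPAM config entry and print its Subnet.
		out, err := exec.Command("docker", "network", "inspect",
			"custom-subnet-415366", "--format", "{{(index .IPAM.Config 0).Subnet}}").Output()
		if err != nil {
			fmt.Println("inspect failed:", err) // the network is gone once cleanup runs
			return
		}
		if got := strings.TrimSpace(string(out)); got != "192.168.60.0/24" {
			fmt.Printf("unexpected subnet: got %q, want 192.168.60.0/24\n", got)
		}
	}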

                                                
                                    
TestKicStaticIP (37.01s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-526370 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-526370 --static-ip=192.168.200.200: (34.799603788s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-526370 ip
helpers_test.go:175: Cleaning up "static-ip-526370" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-526370
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-526370: (2.047984294s)
--- PASS: TestKicStaticIP (37.01s)

                                                
                                    
TestMainNoArgs (0.07s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.07s)

                                                
                                    
TestMinikubeProfile (68.06s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-602136 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-602136 --driver=docker  --container-runtime=docker: (28.318704069s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-605229 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-605229 --driver=docker  --container-runtime=docker: (34.405432422s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-602136
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-605229
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-605229" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-605229
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-605229: (2.059853968s)
helpers_test.go:175: Cleaning up "first-602136" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-602136
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-602136: (2.074423762s)
--- PASS: TestMinikubeProfile (68.06s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (7.86s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-808501 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-808501 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (6.85718992s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.86s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.26s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-808501 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (8.18s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-821185 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
E0422 17:24:45.111576    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/addons-613799/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-821185 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (7.180195755s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.18s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.26s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-821185 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.46s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-808501 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-808501 --alsologtostderr -v=5: (1.454722749s)
--- PASS: TestMountStart/serial/DeleteFirst (1.46s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.25s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-821185 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.25s)

                                                
                                    
TestMountStart/serial/Stop (1.21s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-821185
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-821185: (1.206519891s)
--- PASS: TestMountStart/serial/Stop (1.21s)

                                                
                                    
TestMountStart/serial/RestartStopped (8.68s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-821185
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-821185: (7.674775612s)
--- PASS: TestMountStart/serial/RestartStopped (8.68s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.25s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-821185 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.25s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (84.33s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-214155 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
E0422 17:25:26.632903    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/functional-892312/client.crt: no such file or directory
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-214155 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (1m23.748892012s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-214155 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (84.33s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (55.61s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-214155 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-214155 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-214155 -- rollout status deployment/busybox: (3.300603166s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-214155 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-214155 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-214155 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-214155 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-214155 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-214155 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
E0422 17:26:49.677591    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/functional-892312/client.crt: no such file or directory
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-214155 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-214155 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-214155 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-214155 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-214155 -- exec busybox-fc5497c4f-7cr5p -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-214155 -- exec busybox-fc5497c4f-m6dsf -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-214155 -- exec busybox-fc5497c4f-7cr5p -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-214155 -- exec busybox-fc5497c4f-m6dsf -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-214155 -- exec busybox-fc5497c4f-7cr5p -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-214155 -- exec busybox-fc5497c4f-m6dsf -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (55.61s)
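
The repeated "expected 2 Pod IPs but got 1 (may be temporary)" lines are a retry loop: the suite re-runs the same jsonpath query until the busybox deployment reports one pod IP per node. A rough Go sketch of such a loop, using plain kubectl with --context in place of the minikube kubectl wrapper; the retry count and sleep interval are illustrative assumptions:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// pollPodIPs re-runs the jsonpath query until `want` pod IPs appear
	// or the retry budget runs out (a sketch, not the suite's code).
	func pollPodIPs(context string, want, retries int) ([]string, error) {
		for i := 0; i < retries; i++ {
			out, err := exec.Command("kubectl", "--context", context, "get", "pods",
				"-o", "jsonpath={.items[*].status.podIP}").Output()
			if err != nil {
				return nil, err
			}
			// IPs come back space-separated, e.g. "10.244.0.3 10.244.1.2".
			if ips := strings.Fields(string(out)); len(ips) >= want {
				return ips, nil
			}
			time.Sleep(5 * time.Second)
		}
		return nil, fmt.Errorf("never saw %d pod IPs", want)
	}

	func main() {
		ips, err := pollPodIPs("multinode-214155", 2, 12)
		fmt.Println(ips, err)
	}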

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (1.07s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-214155 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-214155 -- exec busybox-fc5497c4f-7cr5p -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-214155 -- exec busybox-fc5497c4f-7cr5p -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-214155 -- exec busybox-fc5497c4f-m6dsf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-214155 -- exec busybox-fc5497c4f-m6dsf -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.07s)

                                                
                                    
TestMultiNode/serial/AddNode (18.76s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-214155 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-214155 -v 3 --alsologtostderr: (18.036559385s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-214155 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (18.76s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.11s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-214155 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.11s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.33s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.33s)

                                                
                                    
TestMultiNode/serial/CopyFile (10.23s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-214155 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-214155 cp testdata/cp-test.txt multinode-214155:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-214155 ssh -n multinode-214155 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-214155 cp multinode-214155:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3053937616/001/cp-test_multinode-214155.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-214155 ssh -n multinode-214155 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-214155 cp multinode-214155:/home/docker/cp-test.txt multinode-214155-m02:/home/docker/cp-test_multinode-214155_multinode-214155-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-214155 ssh -n multinode-214155 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-214155 ssh -n multinode-214155-m02 "sudo cat /home/docker/cp-test_multinode-214155_multinode-214155-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-214155 cp multinode-214155:/home/docker/cp-test.txt multinode-214155-m03:/home/docker/cp-test_multinode-214155_multinode-214155-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-214155 ssh -n multinode-214155 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-214155 ssh -n multinode-214155-m03 "sudo cat /home/docker/cp-test_multinode-214155_multinode-214155-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-214155 cp testdata/cp-test.txt multinode-214155-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-214155 ssh -n multinode-214155-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-214155 cp multinode-214155-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3053937616/001/cp-test_multinode-214155-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-214155 ssh -n multinode-214155-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-214155 cp multinode-214155-m02:/home/docker/cp-test.txt multinode-214155:/home/docker/cp-test_multinode-214155-m02_multinode-214155.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-214155 ssh -n multinode-214155-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-214155 ssh -n multinode-214155 "sudo cat /home/docker/cp-test_multinode-214155-m02_multinode-214155.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-214155 cp multinode-214155-m02:/home/docker/cp-test.txt multinode-214155-m03:/home/docker/cp-test_multinode-214155-m02_multinode-214155-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-214155 ssh -n multinode-214155-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-214155 ssh -n multinode-214155-m03 "sudo cat /home/docker/cp-test_multinode-214155-m02_multinode-214155-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-214155 cp testdata/cp-test.txt multinode-214155-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-214155 ssh -n multinode-214155-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-214155 cp multinode-214155-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3053937616/001/cp-test_multinode-214155-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-214155 ssh -n multinode-214155-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-214155 cp multinode-214155-m03:/home/docker/cp-test.txt multinode-214155:/home/docker/cp-test_multinode-214155-m03_multinode-214155.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-214155 ssh -n multinode-214155-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-214155 ssh -n multinode-214155 "sudo cat /home/docker/cp-test_multinode-214155-m03_multinode-214155.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-214155 cp multinode-214155-m03:/home/docker/cp-test.txt multinode-214155-m02:/home/docker/cp-test_multinode-214155-m03_multinode-214155-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-214155 ssh -n multinode-214155-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-214155 ssh -n multinode-214155-m02 "sudo cat /home/docker/cp-test_multinode-214155-m03_multinode-214155-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.23s)

                                                
                                    
TestMultiNode/serial/StopNode (2.24s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-214155 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-214155 node stop m03: (1.219322562s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-214155 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-214155 status: exit status 7 (502.247756ms)

                                                
                                                
-- stdout --
	multinode-214155
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-214155-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-214155-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-214155 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-214155 status --alsologtostderr: exit status 7 (517.864133ms)

                                                
                                                
-- stdout --
	multinode-214155
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-214155-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-214155-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0422 17:27:55.463716  162129 out.go:291] Setting OutFile to fd 1 ...
	I0422 17:27:55.463925  162129 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 17:27:55.463956  162129 out.go:304] Setting ErrFile to fd 2...
	I0422 17:27:55.463975  162129 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 17:27:55.464282  162129 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18706-2371/.minikube/bin
	I0422 17:27:55.464499  162129 out.go:298] Setting JSON to false
	I0422 17:27:55.464557  162129 mustload.go:65] Loading cluster: multinode-214155
	I0422 17:27:55.464677  162129 notify.go:220] Checking for updates...
	I0422 17:27:55.465096  162129 config.go:182] Loaded profile config "multinode-214155": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0422 17:27:55.465133  162129 status.go:255] checking status of multinode-214155 ...
	I0422 17:27:55.465706  162129 cli_runner.go:164] Run: docker container inspect multinode-214155 --format={{.State.Status}}
	I0422 17:27:55.483232  162129 status.go:330] multinode-214155 host status = "Running" (err=<nil>)
	I0422 17:27:55.483261  162129 host.go:66] Checking if "multinode-214155" exists ...
	I0422 17:27:55.483539  162129 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-214155
	I0422 17:27:55.500225  162129 host.go:66] Checking if "multinode-214155" exists ...
	I0422 17:27:55.500726  162129 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0422 17:27:55.500869  162129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-214155
	I0422 17:27:55.530372  162129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/18706-2371/.minikube/machines/multinode-214155/id_rsa Username:docker}
	I0422 17:27:55.621922  162129 ssh_runner.go:195] Run: systemctl --version
	I0422 17:27:55.626408  162129 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 17:27:55.639518  162129 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0422 17:27:55.701899  162129 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:62 SystemTime:2024-04-22 17:27:55.692331329 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1058-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215101440 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:26.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.26.1]] Warnings:<nil>}}
	I0422 17:27:55.702559  162129 kubeconfig.go:125] found "multinode-214155" server: "https://192.168.67.2:8443"
	I0422 17:27:55.702601  162129 api_server.go:166] Checking apiserver status ...
	I0422 17:27:55.702649  162129 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0422 17:27:55.714382  162129 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2099/cgroup
	I0422 17:27:55.723638  162129 api_server.go:182] apiserver freezer: "2:freezer:/docker/c99643850898204a4b2d979d34a798c6717233f95226b26f55bbb83005be8980/kubepods/burstable/pod0474c52fe498509a049f2f6807f7ade7/59f7ab72be1fdbed3b2c760a7cb2623620411c2f7445413efc8591ab3a60448f"
	I0422 17:27:55.723709  162129 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/c99643850898204a4b2d979d34a798c6717233f95226b26f55bbb83005be8980/kubepods/burstable/pod0474c52fe498509a049f2f6807f7ade7/59f7ab72be1fdbed3b2c760a7cb2623620411c2f7445413efc8591ab3a60448f/freezer.state
	I0422 17:27:55.732557  162129 api_server.go:204] freezer state: "THAWED"
	I0422 17:27:55.732588  162129 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0422 17:27:55.740549  162129 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0422 17:27:55.740579  162129 status.go:422] multinode-214155 apiserver status = Running (err=<nil>)
	I0422 17:27:55.740591  162129 status.go:257] multinode-214155 status: &{Name:multinode-214155 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0422 17:27:55.740608  162129 status.go:255] checking status of multinode-214155-m02 ...
	I0422 17:27:55.740981  162129 cli_runner.go:164] Run: docker container inspect multinode-214155-m02 --format={{.State.Status}}
	I0422 17:27:55.759211  162129 status.go:330] multinode-214155-m02 host status = "Running" (err=<nil>)
	I0422 17:27:55.759236  162129 host.go:66] Checking if "multinode-214155-m02" exists ...
	I0422 17:27:55.759537  162129 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-214155-m02
	I0422 17:27:55.780109  162129 host.go:66] Checking if "multinode-214155-m02" exists ...
	I0422 17:27:55.780427  162129 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0422 17:27:55.780474  162129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-214155-m02
	I0422 17:27:55.800564  162129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32918 SSHKeyPath:/home/jenkins/minikube-integration/18706-2371/.minikube/machines/multinode-214155-m02/id_rsa Username:docker}
	I0422 17:27:55.890782  162129 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0422 17:27:55.904002  162129 status.go:257] multinode-214155-m02 status: &{Name:multinode-214155-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0422 17:27:55.904036  162129 status.go:255] checking status of multinode-214155-m03 ...
	I0422 17:27:55.904331  162129 cli_runner.go:164] Run: docker container inspect multinode-214155-m03 --format={{.State.Status}}
	I0422 17:27:55.920065  162129 status.go:330] multinode-214155-m03 host status = "Stopped" (err=<nil>)
	I0422 17:27:55.920088  162129 status.go:343] host is not running, skipping remaining checks
	I0422 17:27:55.920111  162129 status.go:257] multinode-214155-m03 status: &{Name:multinode-214155-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.24s)
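
Both status invocations above exit with status 7 while m03 is stopped: "minikube status" reports degraded cluster state through its exit code rather than failing outright, so callers must read the code as state. A small Go sketch of recovering that code from a wrapper, using the binary path and profile from this run:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-arm64", "-p", "multinode-214155", "status")
		out, err := cmd.Output()
		fmt.Print(string(out)) // per-node host/kubelet/apiserver lines, as above
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			// Non-zero (7 in the runs above) means a node is not fully up,
			// not that the command itself broke.
			fmt.Println("status exit code:", exitErr.ExitCode())
		}
	}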

                                                
                                    
TestMultiNode/serial/StartAfterStop (10.87s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-214155 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-214155 node start m03 -v=7 --alsologtostderr: (10.107587802s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-214155 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (10.87s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (88.05s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-214155
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-214155
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-214155: (22.492344716s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-214155 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-214155 --wait=true -v=8 --alsologtostderr: (1m5.405514022s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-214155
--- PASS: TestMultiNode/serial/RestartKeepsNodes (88.05s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.44s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-214155 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-214155 node delete m03: (4.758387615s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-214155 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.44s)
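
The go-template at multinode_test.go:444 prints the status of each node's Ready condition on its own line, so a caller can count how many nodes remain Ready after the delete. A hedged sketch of the same query; counting "True" lines is an assumed interpretation of the check, not the test's literal code:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Emit the Ready condition's status ("True"/"False") per node.
		tmpl := `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
		out, err := exec.Command("kubectl", "get", "nodes", "-o", "go-template="+tmpl).Output()
		if err != nil {
			fmt.Println("kubectl failed:", err)
			return
		}
		fmt.Printf("%d Ready nodes\n", strings.Count(string(out), "True")) // expect 2 once m03 is gone
	}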

                                                
                                    
TestMultiNode/serial/StopMultiNode (21.71s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-214155 stop
E0422 17:29:45.110404    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/addons-613799/client.crt: no such file or directory
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-214155 stop: (21.528446303s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-214155 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-214155 status: exit status 7 (91.221135ms)

                                                
                                                
-- stdout --
	multinode-214155
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-214155-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-214155 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-214155 status --alsologtostderr: exit status 7 (92.430467ms)

                                                
                                                
-- stdout --
	multinode-214155
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-214155-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0422 17:30:01.961525  174216 out.go:291] Setting OutFile to fd 1 ...
	I0422 17:30:01.961671  174216 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 17:30:01.961683  174216 out.go:304] Setting ErrFile to fd 2...
	I0422 17:30:01.961688  174216 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0422 17:30:01.961947  174216 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18706-2371/.minikube/bin
	I0422 17:30:01.962137  174216 out.go:298] Setting JSON to false
	I0422 17:30:01.962170  174216 mustload.go:65] Loading cluster: multinode-214155
	I0422 17:30:01.962269  174216 notify.go:220] Checking for updates...
	I0422 17:30:01.962630  174216 config.go:182] Loaded profile config "multinode-214155": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0422 17:30:01.962644  174216 status.go:255] checking status of multinode-214155 ...
	I0422 17:30:01.963149  174216 cli_runner.go:164] Run: docker container inspect multinode-214155 --format={{.State.Status}}
	I0422 17:30:01.978113  174216 status.go:330] multinode-214155 host status = "Stopped" (err=<nil>)
	I0422 17:30:01.978138  174216 status.go:343] host is not running, skipping remaining checks
	I0422 17:30:01.978145  174216 status.go:257] multinode-214155 status: &{Name:multinode-214155 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0422 17:30:01.978215  174216 status.go:255] checking status of multinode-214155-m02 ...
	I0422 17:30:01.978591  174216 cli_runner.go:164] Run: docker container inspect multinode-214155-m02 --format={{.State.Status}}
	I0422 17:30:01.993559  174216 status.go:330] multinode-214155-m02 host status = "Stopped" (err=<nil>)
	I0422 17:30:01.993584  174216 status.go:343] host is not running, skipping remaining checks
	I0422 17:30:01.993592  174216 status.go:257] multinode-214155-m02 status: &{Name:multinode-214155-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (21.71s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (57.3s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-214155 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
E0422 17:30:26.633308    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/functional-892312/client.crt: no such file or directory
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-214155 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (56.640011409s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-214155 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (57.30s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (37.6s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-214155
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-214155-m02 --driver=docker  --container-runtime=docker
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-214155-m02 --driver=docker  --container-runtime=docker: exit status 14 (86.030401ms)

                                                
                                                
-- stdout --
	* [multinode-214155-m02] minikube v1.33.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18706
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18706-2371/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18706-2371/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-214155-m02' is duplicated with machine name 'multinode-214155-m02' in profile 'multinode-214155'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-214155-m03 --driver=docker  --container-runtime=docker
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-214155-m03 --driver=docker  --container-runtime=docker: (35.079344048s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-214155
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-214155: exit status 80 (316.280376ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-214155 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-214155-m03 already exists in multinode-214155-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-214155-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-214155-m03: (2.056902073s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (37.60s)

                                                
                                    
TestPreload (140.42s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-184960 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-184960 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4: (1m42.011797313s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-184960 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-184960 image pull gcr.io/k8s-minikube/busybox: (1.386241908s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-184960
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-184960: (11.094502967s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-184960 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-184960 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker: (23.318111247s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-184960 image list
helpers_test.go:175: Cleaning up "test-preload-184960" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-184960
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-184960: (2.32741695s)
--- PASS: TestPreload (140.42s)
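
The sequence above starts on Kubernetes v1.24.4 with --preload=false, pulls gcr.io/k8s-minikube/busybox, stops the cluster, restarts with defaults, and then lists images; the point is that a manually pulled image survives the stop/start cycle. A minimal sketch of that final check, assuming it reduces to a substring match on "image list" output:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("out/minikube-linux-arm64",
			"-p", "test-preload-184960", "image", "list").Output()
		if err != nil {
			fmt.Println("image list failed:", err)
			return
		}
		// The image pulled before the stop should still be listed.
		if !strings.Contains(string(out), "gcr.io/k8s-minikube/busybox") {
			fmt.Println("busybox image missing after restart")
		}
	}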

                                                
                                    
TestScheduledStopUnix (107.43s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-903499 --memory=2048 --driver=docker  --container-runtime=docker
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-903499 --memory=2048 --driver=docker  --container-runtime=docker: (34.196561255s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-903499 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-903499 -n scheduled-stop-903499
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-903499 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-903499 --cancel-scheduled
E0422 17:34:45.109688    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/addons-613799/client.crt: no such file or directory
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-903499 -n scheduled-stop-903499
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-903499
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-903499 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0422 17:35:26.632961    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/functional-892312/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-903499
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-903499: exit status 7 (71.747299ms)

                                                
                                                
-- stdout --
	scheduled-stop-903499
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-903499 -n scheduled-stop-903499
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-903499 -n scheduled-stop-903499: exit status 7 (78.023704ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-903499" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-903499
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-903499: (1.645455925s)
--- PASS: TestScheduledStopUnix (107.43s)

                                                
                                    
TestSkaffold (117.22s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe308724097 version
skaffold_test.go:63: skaffold version: v2.11.0
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p skaffold-819699 --memory=2600 --driver=docker  --container-runtime=docker
skaffold_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p skaffold-819699 --memory=2600 --driver=docker  --container-runtime=docker: (30.907171145s)
skaffold_test.go:86: copying out/minikube-linux-arm64 to /home/jenkins/workspace/Docker_Linux_docker_arm64/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe308724097 run --minikube-profile skaffold-819699 --kube-context skaffold-819699 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe308724097 run --minikube-profile skaffold-819699 --kube-context skaffold-819699 --status-check=true --port-forward=false --interactive=false: (1m10.151586683s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-74699d9944-vnjvn" [93dffcfb-6127-46ca-ac93-6fe1f4507a01] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.003907894s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-7c89548ddd-8vmkg" [f20de160-aa7c-411b-b8aa-c43e2618b0ee] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.003782911s
helpers_test.go:175: Cleaning up "skaffold-819699" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p skaffold-819699
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p skaffold-819699: (3.064009235s)
--- PASS: TestSkaffold (117.22s)

                                                
                                    
TestInsufficientStorage (12.02s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-866366 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker
E0422 17:37:48.159697    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/addons-613799/client.crt: no such file or directory
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-866366 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker: exit status 26 (9.75027196s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"4394e52a-3be2-48cc-8d36-70eab6e696b6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-866366] minikube v1.33.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"97b414b0-2e07-4f41-aa49-a6b43f9c92f8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18706"}}
	{"specversion":"1.0","id":"c4535639-2702-435c-9d33-90ad66f6adb8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"1c17204b-63b2-4148-b0dc-4d337f7fd683","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18706-2371/kubeconfig"}}
	{"specversion":"1.0","id":"879a1acd-61b5-413a-892e-7cca0b311995","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18706-2371/.minikube"}}
	{"specversion":"1.0","id":"0302f72f-126f-4a17-a048-ce3f112100a1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"77394a9f-d727-4b78-a944-8d0592510055","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"df08d086-163c-4c3d-8405-c96ebabb16eb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"5bffb379-09d5-4e40-8de1-a19ba0c81493","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"68edd505-d50d-4b5d-b966-39c2ec4c5ebf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"8c82d678-1dee-44a1-87a6-1828284634c8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"4418bf5b-41d2-44b4-b9f1-d4276ac77211","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-866366\" primary control-plane node in \"insufficient-storage-866366\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"3972acca-42a7-4153-b37c-3300421edebe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.43-1713736339-18706 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"8f198fc0-31c7-4fe8-80eb-ece925e847e8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"225d70cd-80af-4fc1-b471-fc150e983dde","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-866366 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-866366 --output=json --layout=cluster: exit status 7 (291.886795ms)
-- stdout --
	{"Name":"insufficient-storage-866366","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.33.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-866366","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E0422 17:37:56.000583  206256 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-866366" does not appear in /home/jenkins/minikube-integration/18706-2371/kubeconfig
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-866366 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-866366 --output=json --layout=cluster: exit status 7 (289.489885ms)
-- stdout --
	{"Name":"insufficient-storage-866366","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.33.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-866366","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E0422 17:37:56.297909  206310 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-866366" does not appear in /home/jenkins/minikube-integration/18706-2371/kubeconfig
	E0422 17:37:56.308174  206310 status.go:560] unable to read event log: stat: stat /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/insufficient-storage-866366/events.json: no such file or directory
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-866366" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-866366
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-866366: (1.689882795s)
--- PASS: TestInsufficientStorage (12.02s)
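The RSRC_DOCKER_STORAGE error above carries its own remediation advice. A minimal shell sketch of that advice (the commands come from the error text, the profile name from this run; bypassing the storage check with --force is only safe if you know the disk pressure is transient):

	# Reclaim space on the host; -a also removes unused images:
	docker system prune -a
	# Or prune inside the minikube node when using the Docker container runtime:
	minikube ssh -- docker system prune
	# The storage check itself can be skipped, at your own risk:
	minikube start -p insufficient-storage-866366 --force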

TestRunningBinaryUpgrade (95.91s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1571038004 start -p running-upgrade-265155 --memory=2200 --vm-driver=docker  --container-runtime=docker
E0422 17:45:26.632675    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/functional-892312/client.crt: no such file or directory
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1571038004 start -p running-upgrade-265155 --memory=2200 --vm-driver=docker  --container-runtime=docker: (41.285402419s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-265155 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-265155 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (51.226550388s)
helpers_test.go:175: Cleaning up "running-upgrade-265155" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-265155
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-265155: (2.193252004s)
--- PASS: TestRunningBinaryUpgrade (95.91s)
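For context, this test upgrades a live cluster across minikube releases: an old release binary creates the profile, then the binary under test restarts the same profile without deleting it. A condensed sketch of that flow (the /tmp path is the temp copy of minikube v1.26.0 used by this run):

	# Create the cluster with the legacy binary:
	/tmp/minikube-v1.26.0.1571038004 start -p running-upgrade-265155 --memory=2200 --vm-driver=docker --container-runtime=docker
	# Upgrade in place by restarting the same (still-running) profile with the new binary:
	out/minikube-linux-arm64 start -p running-upgrade-265155 --memory=2200 --driver=docker --container-runtime=docker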

TestKubernetesUpgrade (370.33s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-591305 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0422 17:44:45.108999    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/addons-613799/client.crt: no such file or directory
E0422 17:45:15.735204    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/skaffold-819699/client.crt: no such file or directory
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-591305 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (59.024699124s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-591305
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-591305: (1.61871486s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-591305 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-591305 status --format={{.Host}}: exit status 7 (156.886037ms)
-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-591305 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-591305 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (4m44.23514818s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-591305 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-591305 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=docker
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-591305 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=docker: exit status 106 (88.915714ms)
-- stdout --
	* [kubernetes-upgrade-591305] minikube v1.33.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18706
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18706-2371/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18706-2371/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.30.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-591305
	    minikube start -p kubernetes-upgrade-591305 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-5913052 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.30.0, by running:
	    
	    minikube start -p kubernetes-upgrade-591305 --kubernetes-version=v1.30.0
	    
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-591305 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0422 17:50:26.632937    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/functional-892312/client.crt: no such file or directory
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-591305 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (22.634421421s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-591305" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-591305
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-591305: (2.469594962s)
--- PASS: TestKubernetesUpgrade (370.33s)
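The sequence above boils down to: start at v1.20.0, stop, restart at v1.30.0, then confirm that downgrading the existing cluster back to v1.20.0 is refused (exit status 106, K8S_DOWNGRADE_UNSUPPORTED). A condensed sketch using the same profile and flags as this run:

	out/minikube-linux-arm64 start -p kubernetes-upgrade-591305 --kubernetes-version=v1.20.0 --driver=docker --container-runtime=docker
	out/minikube-linux-arm64 stop -p kubernetes-upgrade-591305
	out/minikube-linux-arm64 start -p kubernetes-upgrade-591305 --kubernetes-version=v1.30.0 --driver=docker --container-runtime=docker
	# Downgrading an existing cluster is rejected (exit status 106):
	out/minikube-linux-arm64 start -p kubernetes-upgrade-591305 --kubernetes-version=v1.20.0 --driver=docker --container-runtime=docker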

TestMissingContainerUpgrade (115.12s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.162644973 start -p missing-upgrade-966159 --memory=2200 --driver=docker  --container-runtime=docker
E0422 17:43:29.678640    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/functional-892312/client.crt: no such file or directory
E0422 17:43:53.814303    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/skaffold-819699/client.crt: no such file or directory
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.162644973 start -p missing-upgrade-966159 --memory=2200 --driver=docker  --container-runtime=docker: (37.546188438s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-966159
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-966159: (10.340856706s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-966159
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-966159 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-966159 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (1m3.723175524s)
helpers_test.go:175: Cleaning up "missing-upgrade-966159" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-966159
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-966159: (2.372666388s)
--- PASS: TestMissingContainerUpgrade (115.12s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.13s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-327292 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-327292 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker: exit status 14 (128.309117ms)
-- stdout --
	* [NoKubernetes-327292] minikube v1.33.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18706
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18706-2371/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18706-2371/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.13s)
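As the MK_USAGE error states, --kubernetes-version cannot be combined with --no-kubernetes; if a version is pinned in the global config, it has to be unset first. A minimal sketch of the valid invocation (both commands appear elsewhere in this run):

	minikube config unset kubernetes-version
	out/minikube-linux-arm64 start -p NoKubernetes-327292 --no-kubernetes --driver=docker --container-runtime=docker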

TestNoKubernetes/serial/StartWithK8s (46.01s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-327292 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-327292 --driver=docker  --container-runtime=docker: (45.643205175s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-327292 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (46.01s)

TestNoKubernetes/serial/StartWithStopK8s (8.18s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-327292 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-327292 --no-kubernetes --driver=docker  --container-runtime=docker: (5.849051869s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-327292 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-327292 status -o json: exit status 2 (366.467488ms)
-- stdout --
	{"Name":"NoKubernetes-327292","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-327292
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-327292: (1.962175776s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (8.18s)

TestNoKubernetes/serial/Start (10.55s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-327292 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-327292 --no-kubernetes --driver=docker  --container-runtime=docker: (10.550129619s)
--- PASS: TestNoKubernetes/serial/Start (10.55s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.29s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-327292 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-327292 "sudo systemctl is-active --quiet service kubelet": exit status 1 (292.288587ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.29s)

TestNoKubernetes/serial/ProfileList (1.05s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.05s)

TestNoKubernetes/serial/Stop (1.25s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-327292
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-327292: (1.246963123s)
--- PASS: TestNoKubernetes/serial/Stop (1.25s)

TestNoKubernetes/serial/StartNoArgs (7.39s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-327292 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-327292 --driver=docker  --container-runtime=docker: (7.386631736s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.39s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-327292 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-327292 "sudo systemctl is-active --quiet service kubelet": exit status 1 (277.410663ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

TestStoppedBinaryUpgrade/Setup (1.16s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.16s)

TestStoppedBinaryUpgrade/Upgrade (113.35s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.3475710768 start -p stopped-upgrade-570968 --memory=2200 --vm-driver=docker  --container-runtime=docker
E0422 17:42:31.894121    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/skaffold-819699/client.crt: no such file or directory
E0422 17:42:31.899369    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/skaffold-819699/client.crt: no such file or directory
E0422 17:42:31.909597    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/skaffold-819699/client.crt: no such file or directory
E0422 17:42:31.929900    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/skaffold-819699/client.crt: no such file or directory
E0422 17:42:31.970123    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/skaffold-819699/client.crt: no such file or directory
E0422 17:42:32.050354    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/skaffold-819699/client.crt: no such file or directory
E0422 17:42:32.210662    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/skaffold-819699/client.crt: no such file or directory
E0422 17:42:32.530895    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/skaffold-819699/client.crt: no such file or directory
E0422 17:42:33.171310    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/skaffold-819699/client.crt: no such file or directory
E0422 17:42:34.451538    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/skaffold-819699/client.crt: no such file or directory
E0422 17:42:37.012217    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/skaffold-819699/client.crt: no such file or directory
E0422 17:42:42.133375    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/skaffold-819699/client.crt: no such file or directory
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.3475710768 start -p stopped-upgrade-570968 --memory=2200 --vm-driver=docker  --container-runtime=docker: (1m13.046322313s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.3475710768 -p stopped-upgrade-570968 stop
E0422 17:42:52.373551    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/skaffold-819699/client.crt: no such file or directory
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.3475710768 -p stopped-upgrade-570968 stop: (11.021923476s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-570968 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0422 17:43:12.853750    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/skaffold-819699/client.crt: no such file or directory
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-570968 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (29.279372203s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (113.35s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.43s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-570968
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-570968: (1.428494681s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.43s)

TestPause/serial/Start (88.46s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-065490 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker
E0422 17:47:31.894037    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/skaffold-819699/client.crt: no such file or directory
E0422 17:47:59.575539    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/skaffold-819699/client.crt: no such file or directory
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-065490 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker: (1m28.462962881s)
--- PASS: TestPause/serial/Start (88.46s)

TestPause/serial/SecondStartNoReconfiguration (31.21s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-065490 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-065490 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (31.199749838s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (31.21s)

TestPause/serial/Pause (0.74s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-065490 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.74s)

TestPause/serial/VerifyStatus (0.39s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-065490 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-065490 --output=json --layout=cluster: exit status 2 (391.839599ms)
-- stdout --
	{"Name":"pause-065490","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-065490","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.39s)
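Note that against a paused cluster, status reports StatusCode 418 ("Paused") in the JSON while exiting with code 2, so callers should parse the output rather than gate on the exit code alone. A small sketch of extracting the state (assumes jq is available on the host; it is not part of this test suite):

	out/minikube-linux-arm64 status -p pause-065490 --output=json --layout=cluster | jq -r '.StatusName'
	# => Paused   (the status command itself still exits 2 here)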

TestPause/serial/Unpause (0.51s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-065490 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.51s)

TestPause/serial/PauseAgain (1.04s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-065490 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-065490 --alsologtostderr -v=5: (1.035483564s)
--- PASS: TestPause/serial/PauseAgain (1.04s)

TestPause/serial/DeletePaused (2.34s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-065490 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-065490 --alsologtostderr -v=5: (2.33724206s)
--- PASS: TestPause/serial/DeletePaused (2.34s)

TestPause/serial/VerifyDeletedResources (16.02s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (15.965466012s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-065490
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-065490: exit status 1 (16.180681ms)
-- stdout --
	[]
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-065490: no such volume
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (16.02s)
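The verification above relies on plain Docker: after delete, the profile's container, volume, and network should all be gone, so the "no such volume" error from docker volume inspect is the expected (passing) outcome. In sketch form, with names from this run:

	docker ps -a                         # no pause-065490 container left
	docker volume inspect pause-065490   # "no such volume" confirms cleanup
	docker network ls                    # no pause-065490 network left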

TestNetworkPlugins/group/auto/Start (91.71s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-060426 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker
E0422 17:49:45.109568    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/addons-613799/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-060426 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker: (1m31.707182538s)
--- PASS: TestNetworkPlugins/group/auto/Start (91.71s)

TestNetworkPlugins/group/kindnet/Start (68.99s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-060426 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-060426 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker: (1m8.993299648s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (68.99s)

TestNetworkPlugins/group/auto/KubeletFlags (0.34s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-060426 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.34s)

TestNetworkPlugins/group/auto/NetCatPod (11.39s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-060426 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-n2qv8" [098205d7-1019-4585-ac3f-5c17432ccfcb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-n2qv8" [098205d7-1019-4585-ac3f-5c17432ccfcb] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.004117116s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.39s)

TestNetworkPlugins/group/auto/DNS (0.28s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-060426 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.28s)

TestNetworkPlugins/group/auto/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-060426 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.20s)

TestNetworkPlugins/group/auto/HairPin (0.23s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-060426 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.23s)
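Each NetworkPlugins group repeats the same three probes against its netcat deployment: in-cluster DNS, a loopback port, and the pod reaching itself through its own service name (the hairpin case). Condensed sketch using the auto group's context; the other groups only swap the context name:

	kubectl --context auto-060426 exec deployment/netcat -- nslookup kubernetes.default
	kubectl --context auto-060426 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
	# Hairpin check: the pod dials its own service:
	kubectl --context auto-060426 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"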

TestNetworkPlugins/group/calico/Start (91.38s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-060426 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-060426 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker: (1m31.378490571s)
--- PASS: TestNetworkPlugins/group/calico/Start (91.38s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-9szrb" [fd1f10ef-2b8b-40d3-b043-f0df5c643725] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003879667s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.4s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-060426 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.40s)

TestNetworkPlugins/group/kindnet/NetCatPod (12.36s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-060426 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-86d6q" [19a53673-8707-44cd-a66a-d4e7e35791d8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-86d6q" [19a53673-8707-44cd-a66a-d4e7e35791d8] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.003580006s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.36s)

TestNetworkPlugins/group/kindnet/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-060426 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.20s)

TestNetworkPlugins/group/kindnet/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-060426 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.17s)

TestNetworkPlugins/group/kindnet/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-060426 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.20s)

TestNetworkPlugins/group/custom-flannel/Start (69.45s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-060426 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-060426 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker: (1m9.453198765s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (69.45s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-gtpqs" [7e40729b-ea94-4009-b689-5027fe6d83e4] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005994261s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.45s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-060426 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.45s)

TestNetworkPlugins/group/calico/NetCatPod (12.36s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-060426 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-gfn74" [c341fae7-ef10-485c-a25a-97a13c7fc0f5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-gfn74" [c341fae7-ef10-485c-a25a-97a13c7fc0f5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.004171863s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.36s)

TestNetworkPlugins/group/calico/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-060426 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.23s)

TestNetworkPlugins/group/calico/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-060426 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.16s)

TestNetworkPlugins/group/calico/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-060426 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.19s)

TestNetworkPlugins/group/false/Start (57.24s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p false-060426 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p false-060426 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker: (57.238205875s)
--- PASS: TestNetworkPlugins/group/false/Start (57.24s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.35s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-060426 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.35s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (14.36s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-060426 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-m4pnh" [929026a9-ccaf-46cd-b33e-27419d205e8c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-m4pnh" [929026a9-ccaf-46cd-b33e-27419d205e8c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 14.02215231s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (14.36s)

TestNetworkPlugins/group/custom-flannel/DNS (0.34s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-060426 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.34s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.32s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-060426 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.32s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-060426 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.20s)

TestNetworkPlugins/group/enable-default-cni/Start (88.48s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-060426 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker
E0422 17:54:28.159889    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/addons-613799/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-060426 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker: (1m28.481797632s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (88.48s)

TestNetworkPlugins/group/false/KubeletFlags (0.36s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p false-060426 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.36s)

TestNetworkPlugins/group/false/NetCatPod (10.28s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-060426 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-zp5ds" [ab07b8fb-778b-4bcf-aae0-de62dfcc8525] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0422 17:54:45.108936    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/addons-613799/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-zp5ds" [ab07b8fb-778b-4bcf-aae0-de62dfcc8525] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 10.004498403s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (10.28s)

TestNetworkPlugins/group/false/DNS (0.32s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-060426 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.32s)

TestNetworkPlugins/group/false/Localhost (0.3s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-060426 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.30s)

TestNetworkPlugins/group/false/HairPin (0.31s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-060426 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.31s)

TestNetworkPlugins/group/flannel/Start (65.17s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-060426 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker
E0422 17:55:26.633241    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/functional-892312/client.crt: no such file or directory
E0422 17:55:51.300393    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/auto-060426/client.crt: no such file or directory
E0422 17:55:51.305747    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/auto-060426/client.crt: no such file or directory
E0422 17:55:51.316012    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/auto-060426/client.crt: no such file or directory
E0422 17:55:51.336252    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/auto-060426/client.crt: no such file or directory
E0422 17:55:51.376563    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/auto-060426/client.crt: no such file or directory
E0422 17:55:51.456933    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/auto-060426/client.crt: no such file or directory
E0422 17:55:51.617980    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/auto-060426/client.crt: no such file or directory
E0422 17:55:51.938127    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/auto-060426/client.crt: no such file or directory
E0422 17:55:52.578880    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/auto-060426/client.crt: no such file or directory
E0422 17:55:53.859465    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/auto-060426/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-060426 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker: (1m5.168658596s)
--- PASS: TestNetworkPlugins/group/flannel/Start (65.17s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-060426 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.3s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-060426 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-xq779" [0f8d7475-8424-4621-a4de-a766f7ccb345] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0422 17:55:56.420629    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/auto-060426/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-xq779" [0f8d7475-8424-4621-a4de-a766f7ccb345] Running
E0422 17:56:01.541794    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/auto-060426/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.003165466s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.30s)
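
Note: NetCatPod force-replaces testdata/netcat-deployment.yaml and then polls until a pod labelled app=netcat is Running and Ready. Done by hand, the same check is roughly (context name taken from this run):

	kubectl --context enable-default-cni-060426 replace --force -f testdata/netcat-deployment.yaml
	# Approximation of the helper's readiness polling:
	kubectl --context enable-default-cni-060426 wait --for=condition=ready pod -l app=netcat --timeout=15m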

TestNetworkPlugins/group/enable-default-cni/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-060426 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.23s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-060426 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.21s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.23s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-060426 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.23s)
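
Note: DNS, Localhost and HairPin all reuse the netcat pod. DNS resolves kubernetes.default through the cluster DNS service; Localhost connects to the pod's own port over 127.0.0.1; HairPin dials the pod back through its own service name, which only succeeds when hairpin traffic is handled correctly by the network plugin. In the nc invocations, -z probes the port without sending data, -w 5 is the connection timeout in seconds, and -i 5 is the delay between probes. The hairpin probe run by hand:

	kubectl --context enable-default-cni-060426 exec deployment/netcat -- \
	  /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"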

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-pxfd8" [45e561eb-b24d-4673-83bc-27a400ba4d29] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.006196704s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
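
Note: ControllerPod waits for the flannel DaemonSet pod (label app=flannel in the kube-flannel namespace) to be Running; the equivalent manual check is simply:

	kubectl --context flannel-060426 get pods -n kube-flannel -l app=flannel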

TestNetworkPlugins/group/flannel/KubeletFlags (0.36s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-060426 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.36s)

TestNetworkPlugins/group/flannel/NetCatPod (11.37s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-060426 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-snwqf" [527772a1-caa6-4fe5-8be0-b33322da5f87] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-snwqf" [527772a1-caa6-4fe5-8be0-b33322da5f87] Running
E0422 17:56:32.263539    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/auto-060426/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.003174973s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.37s)

TestNetworkPlugins/group/bridge/Start (93.48s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-060426 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-060426 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker: (1m33.477229846s)
--- PASS: TestNetworkPlugins/group/bridge/Start (93.48s)

TestNetworkPlugins/group/flannel/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-060426 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.19s)

TestNetworkPlugins/group/flannel/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-060426 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.15s)

TestNetworkPlugins/group/flannel/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-060426 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.16s)

TestNetworkPlugins/group/kubenet/Start (93.37s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kubenet-060426 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker
E0422 17:57:09.989397    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/kindnet-060426/client.crt: no such file or directory
E0422 17:57:13.223710    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/auto-060426/client.crt: no such file or directory
E0422 17:57:30.469587    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/kindnet-060426/client.crt: no such file or directory
E0422 17:57:31.893962    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/skaffold-819699/client.crt: no such file or directory
E0422 17:57:56.999880    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/calico-060426/client.crt: no such file or directory
E0422 17:57:57.005152    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/calico-060426/client.crt: no such file or directory
E0422 17:57:57.015835    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/calico-060426/client.crt: no such file or directory
E0422 17:57:57.036077    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/calico-060426/client.crt: no such file or directory
E0422 17:57:57.076377    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/calico-060426/client.crt: no such file or directory
E0422 17:57:57.156826    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/calico-060426/client.crt: no such file or directory
E0422 17:57:57.317225    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/calico-060426/client.crt: no such file or directory
E0422 17:57:57.637930    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/calico-060426/client.crt: no such file or directory
E0422 17:57:58.278904    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/calico-060426/client.crt: no such file or directory
E0422 17:57:59.559589    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/calico-060426/client.crt: no such file or directory
E0422 17:58:02.120616    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/calico-060426/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kubenet-060426 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker: (1m33.368740133s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (93.37s)
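
Note: unlike the CNI variants above, this run passes --network-plugin=kubenet rather than --cni=..., exercising kubelet's legacy built-in plugin instead of a CNI manifest. Reproduction sketch with an invented profile name:

	out/minikube-linux-arm64 start -p kubenet-demo --memory=3072 \
	  --wait=true --wait-timeout=15m --network-plugin=kubenet \
	  --driver=docker --container-runtime=docker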

TestNetworkPlugins/group/bridge/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-060426 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.30s)

TestNetworkPlugins/group/bridge/NetCatPod (10.28s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-060426 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-zv6x9" [f7ff93e4-581d-4fe6-80fb-d5ecd30ec60c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-zv6x9" [f7ff93e4-581d-4fe6-80fb-d5ecd30ec60c] Running
E0422 17:58:07.241077    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/calico-060426/client.crt: no such file or directory
E0422 17:58:11.430508    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/kindnet-060426/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.003316403s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.28s)

TestNetworkPlugins/group/bridge/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-060426 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.21s)

TestNetworkPlugins/group/bridge/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-060426 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.16s)

TestNetworkPlugins/group/bridge/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-060426 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.17s)

TestStartStop/group/old-k8s-version/serial/FirstStart (162.87s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-986384 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0
E0422 17:58:35.144834    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/auto-060426/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-986384 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0: (2m42.872298709s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (162.87s)
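
Note: this group pins --kubernetes-version=v1.20.0, the oldest version the suite exercises, which is why FirstStart takes 2m42s against roughly half that for the v1.30.0 starts elsewhere in this report. A quick manual check that the pinned version actually came up (VERSION column):

	kubectl --context old-k8s-version-986384 get nodes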

TestNetworkPlugins/group/kubenet/KubeletFlags (0.39s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kubenet-060426 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.39s)

TestNetworkPlugins/group/kubenet/NetCatPod (14.35s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-060426 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-8d9b6" [3d5c4c6c-eb26-47ba-9492-f16a5a5c9896] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0422 17:58:37.963127    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/calico-060426/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-8d9b6" [3d5c4c6c-eb26-47ba-9492-f16a5a5c9896] Running
E0422 17:58:46.154643    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/custom-flannel-060426/client.crt: no such file or directory
E0422 17:58:46.159891    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/custom-flannel-060426/client.crt: no such file or directory
E0422 17:58:46.170539    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/custom-flannel-060426/client.crt: no such file or directory
E0422 17:58:46.190790    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/custom-flannel-060426/client.crt: no such file or directory
E0422 17:58:46.231025    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/custom-flannel-060426/client.crt: no such file or directory
E0422 17:58:46.312004    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/custom-flannel-060426/client.crt: no such file or directory
E0422 17:58:46.472416    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/custom-flannel-060426/client.crt: no such file or directory
E0422 17:58:46.793014    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/custom-flannel-060426/client.crt: no such file or directory
E0422 17:58:47.433466    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/custom-flannel-060426/client.crt: no such file or directory
E0422 17:58:48.714286    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/custom-flannel-060426/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 14.004239625s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (14.35s)

TestNetworkPlugins/group/kubenet/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-060426 exec deployment/netcat -- nslookup kubernetes.default
E0422 17:58:51.274881    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/custom-flannel-060426/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.19s)

TestNetworkPlugins/group/kubenet/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-060426 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.17s)

TestNetworkPlugins/group/kubenet/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-060426 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.21s)
E0422 18:13:12.552457    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/kindnet-060426/client.crt: no such file or directory
E0422 18:13:37.114490    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/kubenet-060426/client.crt: no such file or directory
E0422 18:13:46.155118    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/custom-flannel-060426/client.crt: no such file or directory
E0422 18:14:00.596391    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/old-k8s-version-986384/client.crt: no such file or directory
E0422 18:14:20.046199    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/calico-060426/client.crt: no such file or directory
E0422 18:14:41.155011    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/false-060426/client.crt: no such file or directory
E0422 18:14:45.108946    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/addons-613799/client.crt: no such file or directory

TestStartStop/group/embed-certs/serial/FirstStart (80.82s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-472320 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.30.0
E0422 17:59:18.923508    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/calico-060426/client.crt: no such file or directory
E0422 17:59:27.116692    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/custom-flannel-060426/client.crt: no such file or directory
E0422 17:59:33.351461    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/kindnet-060426/client.crt: no such file or directory
E0422 17:59:41.155055    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/false-060426/client.crt: no such file or directory
E0422 17:59:41.160692    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/false-060426/client.crt: no such file or directory
E0422 17:59:41.170939    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/false-060426/client.crt: no such file or directory
E0422 17:59:41.191213    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/false-060426/client.crt: no such file or directory
E0422 17:59:41.231538    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/false-060426/client.crt: no such file or directory
E0422 17:59:41.311963    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/false-060426/client.crt: no such file or directory
E0422 17:59:41.472298    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/false-060426/client.crt: no such file or directory
E0422 17:59:41.793280    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/false-060426/client.crt: no such file or directory
E0422 17:59:42.434236    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/false-060426/client.crt: no such file or directory
E0422 17:59:43.715395    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/false-060426/client.crt: no such file or directory
E0422 17:59:45.108870    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/addons-613799/client.crt: no such file or directory
E0422 17:59:46.275885    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/false-060426/client.crt: no such file or directory
E0422 17:59:51.396301    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/false-060426/client.crt: no such file or directory
E0422 18:00:01.636547    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/false-060426/client.crt: no such file or directory
E0422 18:00:08.077237    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/custom-flannel-060426/client.crt: no such file or directory
E0422 18:00:09.679181    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/functional-892312/client.crt: no such file or directory
E0422 18:00:22.117027    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/false-060426/client.crt: no such file or directory
E0422 18:00:26.633198    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/functional-892312/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-472320 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.30.0: (1m20.816006639s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (80.82s)

TestStartStop/group/embed-certs/serial/DeployApp (8.4s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-472320 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [2348d810-4004-4645-ae6b-0ff79d7ac851] Pending
helpers_test.go:344: "busybox" [2348d810-4004-4645-ae6b-0ff79d7ac851] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [2348d810-4004-4645-ae6b-0ff79d7ac851] Running
E0422 18:00:40.844603    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/calico-060426/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.00432946s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-472320 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.40s)
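
Note: DeployApp creates testdata/busybox.yaml, waits for the pod to become Ready, then execs "ulimit -n" in it, confirming both scheduling and exec work on the fresh cluster. By hand that is approximately:

	kubectl --context embed-certs-472320 create -f testdata/busybox.yaml
	kubectl --context embed-certs-472320 wait --for=condition=ready pod/busybox --timeout=8m
	kubectl --context embed-certs-472320 exec busybox -- /bin/sh -c "ulimit -n"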

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.25s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-472320 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-472320 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.037996438s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-472320 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.25s)

TestStartStop/group/embed-certs/serial/Stop (11.06s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-472320 --alsologtostderr -v=3
E0422 18:00:51.300878    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/auto-060426/client.crt: no such file or directory
E0422 18:00:55.499296    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/enable-default-cni-060426/client.crt: no such file or directory
E0422 18:00:55.504635    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/enable-default-cni-060426/client.crt: no such file or directory
E0422 18:00:55.514911    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/enable-default-cni-060426/client.crt: no such file or directory
E0422 18:00:55.535263    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/enable-default-cni-060426/client.crt: no such file or directory
E0422 18:00:55.575495    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/enable-default-cni-060426/client.crt: no such file or directory
E0422 18:00:55.655781    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/enable-default-cni-060426/client.crt: no such file or directory
E0422 18:00:55.816504    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/enable-default-cni-060426/client.crt: no such file or directory
E0422 18:00:56.137519    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/enable-default-cni-060426/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-472320 --alsologtostderr -v=3: (11.062160263s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.06s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-472320 -n embed-certs-472320
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-472320 -n embed-certs-472320: exit status 7 (92.159519ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-472320 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)
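
Note: exit status 7 from minikube status appears to correspond to the fully stopped state, which is why the test records "(may be ok)" and continues; the point of the check is that addons can be enabled while the cluster is down:

	out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-472320   # "Stopped", exits 7
	out/minikube-linux-arm64 addons enable dashboard -p embed-certs-472320 \
	  --images=MetricsScraper=registry.k8s.io/echoserver:1.4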

TestStartStop/group/embed-certs/serial/SecondStart (266.56s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-472320 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.30.0
E0422 18:00:56.777754    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/enable-default-cni-060426/client.crt: no such file or directory
E0422 18:00:58.058896    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/enable-default-cni-060426/client.crt: no such file or directory
E0422 18:01:00.619560    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/enable-default-cni-060426/client.crt: no such file or directory
E0422 18:01:03.077508    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/false-060426/client.crt: no such file or directory
E0422 18:01:05.740663    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/enable-default-cni-060426/client.crt: no such file or directory
E0422 18:01:15.980878    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/enable-default-cni-060426/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-472320 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.30.0: (4m26.145188568s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-472320 -n embed-certs-472320
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (266.56s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.55s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-986384 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [4a1d879e-682e-41ea-802d-d46d0d925877] Pending
helpers_test.go:344: "busybox" [4a1d879e-682e-41ea-802d-d46d0d925877] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0422 18:01:18.985517    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/auto-060426/client.crt: no such file or directory
E0422 18:01:19.646195    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/flannel-060426/client.crt: no such file or directory
E0422 18:01:19.651540    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/flannel-060426/client.crt: no such file or directory
E0422 18:01:19.661791    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/flannel-060426/client.crt: no such file or directory
E0422 18:01:19.682164    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/flannel-060426/client.crt: no such file or directory
E0422 18:01:19.722517    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/flannel-060426/client.crt: no such file or directory
helpers_test.go:344: "busybox" [4a1d879e-682e-41ea-802d-d46d0d925877] Running
E0422 18:01:19.802959    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/flannel-060426/client.crt: no such file or directory
E0422 18:01:19.963221    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/flannel-060426/client.crt: no such file or directory
E0422 18:01:20.283626    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/flannel-060426/client.crt: no such file or directory
E0422 18:01:20.923936    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/flannel-060426/client.crt: no such file or directory
E0422 18:01:22.204825    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/flannel-060426/client.crt: no such file or directory
E0422 18:01:24.765423    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/flannel-060426/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.004636028s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-986384 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.55s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.66s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-986384 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-986384 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.512781365s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-986384 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.66s)

TestStartStop/group/old-k8s-version/serial/Stop (11.15s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-986384 --alsologtostderr -v=3
E0422 18:01:29.885640    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/flannel-060426/client.crt: no such file or directory
E0422 18:01:29.997944    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/custom-flannel-060426/client.crt: no such file or directory
E0422 18:01:36.461204    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/enable-default-cni-060426/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-986384 --alsologtostderr -v=3: (11.15249806s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.15s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-986384 -n old-k8s-version-986384
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-986384 -n old-k8s-version-986384: exit status 7 (91.899183ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-986384 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-7ts4x" [19ef8da2-4dd2-49d2-9e29-339ff6c0e767] Running
E0422 18:05:26.632649    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/functional-892312/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003057237s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-7ts4x" [19ef8da2-4dd2-49d2-9e29-339ff6c0e767] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003239659s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-472320 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-472320 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.22s)

TestStartStop/group/embed-certs/serial/Pause (3s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-472320 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-472320 -n embed-certs-472320
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-472320 -n embed-certs-472320: exit status 2 (330.064838ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-472320 -n embed-certs-472320
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-472320 -n embed-certs-472320: exit status 2 (312.982146ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-472320 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-472320 -n embed-certs-472320
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-472320 -n embed-certs-472320
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.00s)
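
Note: Pause drives the sequence pause -> status -> unpause -> status. While paused, status reports APIServer "Paused" and Kubelet "Stopped", each with exit status 2, which the test tolerates as "(may be ok)". Compressed:

	out/minikube-linux-arm64 pause -p embed-certs-472320 --alsologtostderr -v=1
	out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-472320   # Paused, exit 2
	out/minikube-linux-arm64 unpause -p embed-certs-472320 --alsologtostderr -v=1
	out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-472320   # exits 0 once resumed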

TestStartStop/group/no-preload/serial/FirstStart (55.9s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-256480 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.30.0
E0422 18:05:46.952752    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/bridge-060426/client.crt: no such file or directory
E0422 18:05:51.300142    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/auto-060426/client.crt: no such file or directory
E0422 18:05:55.499302    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/enable-default-cni-060426/client.crt: no such file or directory
E0422 18:06:19.647105    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/flannel-060426/client.crt: no such file or directory
E0422 18:06:20.956697    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/kubenet-060426/client.crt: no such file or directory
E0422 18:06:23.182971    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/enable-default-cni-060426/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-256480 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.30.0: (55.904118593s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (55.90s)
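
Note: --preload=false skips minikube's preloaded image tarball, so every Kubernetes image is pulled individually; that pull path, rather than the tarball fast path, is what the no-preload group is meant to cover. Reproduction sketch with an invented profile name:

	out/minikube-linux-arm64 start -p no-preload-demo --memory=2200 \
	  --preload=false --driver=docker --container-runtime=docker \
	  --kubernetes-version=v1.30.0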

TestStartStop/group/no-preload/serial/DeployApp (10.41s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-256480 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [11284a9d-d1cd-4260-938a-e7e31700265f] Pending
helpers_test.go:344: "busybox" [11284a9d-d1cd-4260-938a-e7e31700265f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [11284a9d-d1cd-4260-938a-e7e31700265f] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.004095936s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-256480 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.41s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.06s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-256480 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0422 18:06:47.329831    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/flannel-060426/client.crt: no such file or directory
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-256480 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.06s)

TestStartStop/group/no-preload/serial/Stop (11.02s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-256480 --alsologtostderr -v=3
E0422 18:06:49.508095    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/kindnet-060426/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-256480 --alsologtostderr -v=3: (11.018551942s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.02s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-256480 -n no-preload-256480
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-256480 -n no-preload-256480: exit status 7 (80.472923ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-256480 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/no-preload/serial/SecondStart (265.81s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-256480 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.30.0
E0422 18:07:31.893994    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/skaffold-819699/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-256480 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.30.0: (4m25.440093332s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-256480 -n no-preload-256480
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (265.81s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.02s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-sl4t5" [7164d5e8-dc27-487b-b953-df96cc158140] Running
E0422 18:07:57.000123    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/calico-060426/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.02242118s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.02s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-sl4t5" [7164d5e8-dc27-487b-b953-df96cc158140] Running
E0422 18:08:03.109965    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/bridge-060426/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004169643s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-986384 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.11s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-986384 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/old-k8s-version/serial/Pause (2.77s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-986384 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-986384 -n old-k8s-version-986384
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-986384 -n old-k8s-version-986384: exit status 2 (376.83529ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-986384 -n old-k8s-version-986384
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-986384 -n old-k8s-version-986384: exit status 2 (328.726277ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-986384 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-986384 -n old-k8s-version-986384
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-986384 -n old-k8s-version-986384
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.77s)
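
Note: the pause check above is reproducible by hand. A minimal sketch against this run's profile (in this run, "status" printed "Paused"/"Stopped" and exited 2 while the profile was paused, hence the "may be ok" annotations):

  $ out/minikube-linux-arm64 pause -p old-k8s-version-986384
  $ out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-986384   # prints "Paused", exits 2
  $ out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-986384     # prints "Stopped", exits 2
  $ out/minikube-linux-arm64 unpause -p old-k8s-version-986384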

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (88.42s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-778414 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.30.0
E0422 18:08:30.793352    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/bridge-060426/client.crt: no such file or directory
E0422 18:08:37.113936    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/kubenet-060426/client.crt: no such file or directory
E0422 18:08:46.155217    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/custom-flannel-060426/client.crt: no such file or directory
E0422 18:09:04.797901    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/kubenet-060426/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-778414 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.30.0: (1m28.41537003s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (88.42s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.37s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-778414 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [3eaf6102-edb3-4e8e-b65a-899d8a0e3d59] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0422 18:09:41.155663    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/false-060426/client.crt: no such file or directory
helpers_test.go:344: "busybox" [3eaf6102-edb3-4e8e-b65a-899d8a0e3d59] Running
E0422 18:09:45.112246    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/addons-613799/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.003943506s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-778414 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.37s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.05s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-778414 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-778414 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.05s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (10.9s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-778414 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-778414 --alsologtostderr -v=3: (10.896822271s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (10.90s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-778414 -n default-k8s-diff-port-778414
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-778414 -n default-k8s-diff-port-778414: exit status 7 (93.125585ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-778414 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.24s)
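
Note: this check exercises enabling an addon while the cluster is down. A minimal sketch of the same sequence (in this run, "status --format={{.Host}}" printed "Stopped" and exited 7, which the test treats as acceptable):

  $ out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-778414   # "Stopped", exit 7
  $ out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-778414 --images=MetricsScraper=registry.k8s.io/echoserver:1.4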

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (303.43s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-778414 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.30.0
E0422 18:10:26.633186    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/functional-892312/client.crt: no such file or directory
E0422 18:10:51.300243    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/auto-060426/client.crt: no such file or directory
E0422 18:10:55.499446    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/enable-default-cni-060426/client.crt: no such file or directory
E0422 18:11:08.160877    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/addons-613799/client.crt: no such file or directory
E0422 18:11:16.754937    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/old-k8s-version-986384/client.crt: no such file or directory
E0422 18:11:16.760269    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/old-k8s-version-986384/client.crt: no such file or directory
E0422 18:11:16.770546    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/old-k8s-version-986384/client.crt: no such file or directory
E0422 18:11:16.790712    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/old-k8s-version-986384/client.crt: no such file or directory
E0422 18:11:16.831087    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/old-k8s-version-986384/client.crt: no such file or directory
E0422 18:11:16.911423    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/old-k8s-version-986384/client.crt: no such file or directory
E0422 18:11:17.071781    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/old-k8s-version-986384/client.crt: no such file or directory
E0422 18:11:17.392045    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/old-k8s-version-986384/client.crt: no such file or directory
E0422 18:11:18.032517    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/old-k8s-version-986384/client.crt: no such file or directory
E0422 18:11:19.313232    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/old-k8s-version-986384/client.crt: no such file or directory
E0422 18:11:19.646875    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/flannel-060426/client.crt: no such file or directory
E0422 18:11:21.874048    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/old-k8s-version-986384/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-778414 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.30.0: (5m3.090215324s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-778414 -n default-k8s-diff-port-778414
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (303.43s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-smcw9" [404df5f3-7593-4a1e-90c1-578e443efd11] Running
E0422 18:11:26.994781    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/old-k8s-version-986384/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00423984s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-smcw9" [404df5f3-7593-4a1e-90c1-578e443efd11] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003942154s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-256480 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-256480 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/no-preload/serial/Pause (2.86s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-256480 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-256480 -n no-preload-256480
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-256480 -n no-preload-256480: exit status 2 (336.966623ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-256480 -n no-preload-256480
E0422 18:11:37.235415    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/old-k8s-version-986384/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-256480 -n no-preload-256480: exit status 2 (328.533341ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-256480 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-256480 -n no-preload-256480
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-256480 -n no-preload-256480
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.86s)

TestStartStop/group/newest-cni/serial/FirstStart (47.12s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-481071 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.30.0
E0422 18:11:49.507613    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/kindnet-060426/client.crt: no such file or directory
E0422 18:11:57.715717    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/old-k8s-version-986384/client.crt: no such file or directory
E0422 18:12:14.346337    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/auto-060426/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-481071 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.30.0: (47.122733873s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (47.12s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.19s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-481071 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-481071 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.185776002s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.19s)

TestStartStop/group/newest-cni/serial/Stop (11s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-481071 --alsologtostderr -v=3
E0422 18:12:31.894103    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/skaffold-819699/client.crt: no such file or directory
E0422 18:12:38.676076    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/old-k8s-version-986384/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-481071 --alsologtostderr -v=3: (10.996629612s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (11.00s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-481071 -n newest-cni-481071
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-481071 -n newest-cni-481071: exit status 7 (90.058949ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-481071 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/newest-cni/serial/SecondStart (18.01s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-481071 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.30.0
E0422 18:12:57.000528    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/calico-060426/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-481071 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.30.0: (17.623058774s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-481071 -n newest-cni-481071
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (18.01s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-481071 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/newest-cni/serial/Pause (3.01s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-481071 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-481071 -n newest-cni-481071
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-481071 -n newest-cni-481071: exit status 2 (406.716612ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-481071 -n newest-cni-481071
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-481071 -n newest-cni-481071: exit status 2 (325.289707ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-481071 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-481071 -n newest-cni-481071
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-481071 -n newest-cni-481071
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.01s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-wc8tb" [0712c240-4387-4e08-a2df-d087356db189] Running
E0422 18:15:09.201926    7728 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/custom-flannel-060426/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003470702s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-wc8tb" [0712c240-4387-4e08-a2df-d087356db189] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004060674s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-778414 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.21s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-778414 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.21s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.76s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-778414 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-778414 -n default-k8s-diff-port-778414
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-778414 -n default-k8s-diff-port-778414: exit status 2 (304.278727ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-778414 -n default-k8s-diff-port-778414
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-778414 -n default-k8s-diff-port-778414: exit status 2 (305.145837ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-778414 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-778414 -n default-k8s-diff-port-778414
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-778414 -n default-k8s-diff-port-778414
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.76s)
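
Note: the "waiting 9m0s for pods matching ..." steps in the checks above poll by label selector; a roughly equivalent manual check (a sketch, not the harness's exact call) would be:

  $ kubectl --context default-k8s-diff-port-778414 -n kubernetes-dashboard wait --for=condition=ready pod -l k8s-app=kubernetes-dashboard --timeout=9m0s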

Test skip (24/342)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.30.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.30.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.0/cached-images (0.00s)

TestDownloadOnly/v1.30.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.30.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.0/binaries (0.00s)

TestDownloadOnly/v1.30.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.30.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.0/kubectl (0.00s)

TestDownloadOnlyKic (0.54s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-836605 --alsologtostderr --driver=docker  --container-runtime=docker
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-836605" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-836605
--- SKIP: TestDownloadOnlyKic (0.54s)

TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:444: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestNetworkPlugins/group/cilium (4.34s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-060426 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-060426

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-060426

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-060426

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-060426

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-060426

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-060426

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-060426

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-060426

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-060426

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-060426

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-060426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-060426"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-060426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-060426"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-060426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-060426"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-060426

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-060426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-060426"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-060426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-060426"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-060426" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-060426" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-060426" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-060426" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-060426" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-060426" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-060426" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-060426" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-060426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-060426"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-060426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-060426"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-060426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-060426"

>>> host: iptables-save:
* Profile "cilium-060426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-060426"

>>> host: iptables table nat:
* Profile "cilium-060426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-060426"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-060426

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-060426

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-060426" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-060426" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-060426

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-060426

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-060426" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-060426" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-060426" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-060426" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-060426" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-060426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-060426"

>>> host: kubelet daemon config:
* Profile "cilium-060426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-060426"

>>> k8s: kubelet logs:
* Profile "cilium-060426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-060426"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-060426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-060426"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-060426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-060426"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/18706-2371/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 22 Apr 2024 17:38:51 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: offline-docker-262638
contexts:
- context:
    cluster: offline-docker-262638
    extensions:
    - extension:
        last-update: Mon, 22 Apr 2024 17:38:51 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.0
      name: context_info
    namespace: default
    user: offline-docker-262638
  name: offline-docker-262638
current-context: offline-docker-262638
kind: Config
preferences: {}
users:
- name: offline-docker-262638
  user:
    client-certificate: /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/offline-docker-262638/client.crt
    client-key: /home/jenkins/minikube-integration/18706-2371/.minikube/profiles/offline-docker-262638/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-060426

>>> host: docker daemon status:
* Profile "cilium-060426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-060426"

>>> host: docker daemon config:
* Profile "cilium-060426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-060426"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-060426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-060426"

>>> host: docker system info:
* Profile "cilium-060426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-060426"

>>> host: cri-docker daemon status:
* Profile "cilium-060426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-060426"

>>> host: cri-docker daemon config:
* Profile "cilium-060426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-060426"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-060426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-060426"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-060426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-060426"

>>> host: cri-dockerd version:
* Profile "cilium-060426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-060426"

>>> host: containerd daemon status:
* Profile "cilium-060426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-060426"

>>> host: containerd daemon config:
* Profile "cilium-060426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-060426"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-060426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-060426"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-060426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-060426"

>>> host: containerd config dump:
* Profile "cilium-060426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-060426"

>>> host: crio daemon status:
* Profile "cilium-060426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-060426"

>>> host: crio daemon config:
* Profile "cilium-060426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-060426"

>>> host: /etc/crio:
* Profile "cilium-060426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-060426"

>>> host: crio config:
* Profile "cilium-060426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-060426"

----------------------- debugLogs end: cilium-060426 [took: 4.186060234s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-060426" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-060426
--- SKIP: TestNetworkPlugins/group/cilium (4.34s)
x
+
TestStartStop/group/disable-driver-mounts (0.14s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-242268" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-242268
--- SKIP: TestStartStop/group/disable-driver-mounts (0.14s)