Test Report: Docker_Linux_containerd_arm64 18277

3b3cd74538400bfa9e43257fd64a7f0f3b029a2d:2024-03-16:33601

Failed tests (7/335)

TestAddons/parallel/Ingress (37.56s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-821353 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-821353 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-821353 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [8c27ab17-43f5-4c23-89b0-066812e104e4] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [8c27ab17-43f5-4c23-89b0-066812e104e4] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.005176359s
addons_test.go:262: (dbg) Run:  out/minikube-linux-arm64 -p addons-821353 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-821353 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-arm64 -p addons-821353 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:297: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.063228918s)

-- stdout --
	;; connection timed out; no servers could be reached

-- /stdout --
addons_test.go:299: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:303: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
addons_test.go:306: (dbg) Run:  out/minikube-linux-arm64 -p addons-821353 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-arm64 -p addons-821353 addons disable ingress-dns --alsologtostderr -v=1: (1.382678846s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-arm64 -p addons-821353 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-arm64 -p addons-821353 addons disable ingress --alsologtostderr -v=1: (7.810178256s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-821353
helpers_test.go:235: (dbg) docker inspect addons-821353:

-- stdout --
	[
	    {
	        "Id": "639e736ab5f657f86bef6e92399250c4b5e1a8428af97d4a4f261a993f46edce",
	        "Created": "2024-03-16T16:56:12.522309096Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 286909,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-03-16T16:56:12.839575634Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:db62270b4bb0cfcde696782f7a6322baca275275e31814ce9fd8998407bf461e",
	        "ResolvConfPath": "/var/lib/docker/containers/639e736ab5f657f86bef6e92399250c4b5e1a8428af97d4a4f261a993f46edce/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/639e736ab5f657f86bef6e92399250c4b5e1a8428af97d4a4f261a993f46edce/hostname",
	        "HostsPath": "/var/lib/docker/containers/639e736ab5f657f86bef6e92399250c4b5e1a8428af97d4a4f261a993f46edce/hosts",
	        "LogPath": "/var/lib/docker/containers/639e736ab5f657f86bef6e92399250c4b5e1a8428af97d4a4f261a993f46edce/639e736ab5f657f86bef6e92399250c4b5e1a8428af97d4a4f261a993f46edce-json.log",
	        "Name": "/addons-821353",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "addons-821353:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-821353",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/a79d5105b62088ade0e0d3a3862b2102b0e4383579e2710d20cf2136c986ac03-init/diff:/var/lib/docker/overlay2/8d60f86c085005efdbad22ffe73f1ce0b89f9b32800c71896e407b2a86b69166/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a79d5105b62088ade0e0d3a3862b2102b0e4383579e2710d20cf2136c986ac03/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a79d5105b62088ade0e0d3a3862b2102b0e4383579e2710d20cf2136c986ac03/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a79d5105b62088ade0e0d3a3862b2102b0e4383579e2710d20cf2136c986ac03/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-821353",
	                "Source": "/var/lib/docker/volumes/addons-821353/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-821353",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-821353",
	                "name.minikube.sigs.k8s.io": "addons-821353",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9a4c7f4768bf03c5a1d57031ef2f5ad129109b3f86fd87588e1e1d921f85ec28",
	            "SandboxKey": "/var/run/docker/netns/9a4c7f4768bf",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33145"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33144"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33141"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33143"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33142"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-821353": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "639e736ab5f6",
	                        "addons-821353"
	                    ],
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "084e4b20435cf7c794c0bbd3f41145c3fb14fa39d06188adc8f92847ae6fcef6",
	                    "EndpointID": "f8e1cad42e1719c0f97613334a9081f0f5fdba247cb7eca4f0a9a0700a365ca6",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "addons-821353",
	                        "639e736ab5f6"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-821353 -n addons-821353
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-821353 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-821353 logs -n 25: (1.469450365s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                | minikube               | jenkins | v1.32.0 | 16 Mar 24 16:55 UTC | 16 Mar 24 16:55 UTC |
	| delete  | -p download-only-534627              | download-only-534627   | jenkins | v1.32.0 | 16 Mar 24 16:55 UTC | 16 Mar 24 16:55 UTC |
	| start   | -o=json --download-only              | download-only-892980   | jenkins | v1.32.0 | 16 Mar 24 16:55 UTC |                     |
	|         | -p download-only-892980              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2    |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.32.0 | 16 Mar 24 16:55 UTC | 16 Mar 24 16:55 UTC |
	| delete  | -p download-only-892980              | download-only-892980   | jenkins | v1.32.0 | 16 Mar 24 16:55 UTC | 16 Mar 24 16:55 UTC |
	| delete  | -p download-only-847118              | download-only-847118   | jenkins | v1.32.0 | 16 Mar 24 16:55 UTC | 16 Mar 24 16:55 UTC |
	| delete  | -p download-only-534627              | download-only-534627   | jenkins | v1.32.0 | 16 Mar 24 16:55 UTC | 16 Mar 24 16:55 UTC |
	| delete  | -p download-only-892980              | download-only-892980   | jenkins | v1.32.0 | 16 Mar 24 16:55 UTC | 16 Mar 24 16:55 UTC |
	| start   | --download-only -p                   | download-docker-723066 | jenkins | v1.32.0 | 16 Mar 24 16:55 UTC |                     |
	|         | download-docker-723066               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p download-docker-723066            | download-docker-723066 | jenkins | v1.32.0 | 16 Mar 24 16:55 UTC | 16 Mar 24 16:55 UTC |
	| start   | --download-only -p                   | binary-mirror-850856   | jenkins | v1.32.0 | 16 Mar 24 16:55 UTC |                     |
	|         | binary-mirror-850856                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:39169               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-850856              | binary-mirror-850856   | jenkins | v1.32.0 | 16 Mar 24 16:55 UTC | 16 Mar 24 16:55 UTC |
	| addons  | disable dashboard -p                 | addons-821353          | jenkins | v1.32.0 | 16 Mar 24 16:55 UTC |                     |
	|         | addons-821353                        |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                  | addons-821353          | jenkins | v1.32.0 | 16 Mar 24 16:55 UTC |                     |
	|         | addons-821353                        |                        |         |         |                     |                     |
	| start   | -p addons-821353 --wait=true         | addons-821353          | jenkins | v1.32.0 | 16 Mar 24 16:55 UTC | 16 Mar 24 16:57 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --driver=docker        |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	| ip      | addons-821353 ip                     | addons-821353          | jenkins | v1.32.0 | 16 Mar 24 16:58 UTC | 16 Mar 24 16:58 UTC |
	| addons  | addons-821353 addons disable         | addons-821353          | jenkins | v1.32.0 | 16 Mar 24 16:58 UTC | 16 Mar 24 16:58 UTC |
	|         | registry --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | addons-821353 addons                 | addons-821353          | jenkins | v1.32.0 | 16 Mar 24 16:58 UTC | 16 Mar 24 16:58 UTC |
	|         | disable metrics-server               |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p          | addons-821353          | jenkins | v1.32.0 | 16 Mar 24 16:58 UTC | 16 Mar 24 16:58 UTC |
	|         | addons-821353                        |                        |         |         |                     |                     |
	| ssh     | addons-821353 ssh curl -s            | addons-821353          | jenkins | v1.32.0 | 16 Mar 24 16:58 UTC | 16 Mar 24 16:58 UTC |
	|         | http://127.0.0.1/ -H 'Host:          |                        |         |         |                     |                     |
	|         | nginx.example.com'                   |                        |         |         |                     |                     |
	| ip      | addons-821353 ip                     | addons-821353          | jenkins | v1.32.0 | 16 Mar 24 16:58 UTC | 16 Mar 24 16:58 UTC |
	| addons  | addons-821353 addons                 | addons-821353          | jenkins | v1.32.0 | 16 Mar 24 16:58 UTC | 16 Mar 24 16:58 UTC |
	|         | disable csi-hostpath-driver          |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-821353 addons disable         | addons-821353          | jenkins | v1.32.0 | 16 Mar 24 16:58 UTC | 16 Mar 24 16:58 UTC |
	|         | ingress-dns --alsologtostderr        |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | addons-821353 addons disable         | addons-821353          | jenkins | v1.32.0 | 16 Mar 24 16:58 UTC | 16 Mar 24 16:58 UTC |
	|         | ingress --alsologtostderr -v=1       |                        |         |         |                     |                     |
	| addons  | addons-821353 addons                 | addons-821353          | jenkins | v1.32.0 | 16 Mar 24 16:58 UTC | 16 Mar 24 16:58 UTC |
	|         | disable volumesnapshots              |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/16 16:55:48
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0316 16:55:48.783737  286447 out.go:291] Setting OutFile to fd 1 ...
	I0316 16:55:48.783874  286447 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0316 16:55:48.783884  286447 out.go:304] Setting ErrFile to fd 2...
	I0316 16:55:48.783889  286447 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0316 16:55:48.784154  286447 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18277-280225/.minikube/bin
	I0316 16:55:48.784626  286447 out.go:298] Setting JSON to false
	I0316 16:55:48.785460  286447 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":9495,"bootTime":1710598654,"procs":144,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0316 16:55:48.785532  286447 start.go:139] virtualization:  
	I0316 16:55:48.788041  286447 out.go:177] * [addons-821353] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0316 16:55:48.790305  286447 out.go:177]   - MINIKUBE_LOCATION=18277
	I0316 16:55:48.792369  286447 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0316 16:55:48.790394  286447 notify.go:220] Checking for updates...
	I0316 16:55:48.796532  286447 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18277-280225/kubeconfig
	I0316 16:55:48.798536  286447 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18277-280225/.minikube
	I0316 16:55:48.800782  286447 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0316 16:55:48.802605  286447 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0316 16:55:48.804661  286447 driver.go:392] Setting default libvirt URI to qemu:///system
	I0316 16:55:48.824963  286447 docker.go:122] docker version: linux-25.0.4:Docker Engine - Community
	I0316 16:55:48.825089  286447 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0316 16:55:48.894513  286447 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:49 SystemTime:2024-03-16 16:55:48.884492384 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0316 16:55:48.894629  286447 docker.go:295] overlay module found
	I0316 16:55:48.897817  286447 out.go:177] * Using the docker driver based on user configuration
	I0316 16:55:48.899415  286447 start.go:297] selected driver: docker
	I0316 16:55:48.899437  286447 start.go:901] validating driver "docker" against <nil>
	I0316 16:55:48.899452  286447 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0316 16:55:48.900163  286447 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0316 16:55:48.962245  286447 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:49 SystemTime:2024-03-16 16:55:48.953588668 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0316 16:55:48.962418  286447 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0316 16:55:48.962645  286447 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0316 16:55:48.964735  286447 out.go:177] * Using Docker driver with root privileges
	I0316 16:55:48.966708  286447 cni.go:84] Creating CNI manager for ""
	I0316 16:55:48.966734  286447 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0316 16:55:48.966749  286447 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0316 16:55:48.966835  286447 start.go:340] cluster config:
	{Name:addons-821353 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-821353 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHA
uthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0316 16:55:48.969126  286447 out.go:177] * Starting "addons-821353" primary control-plane node in "addons-821353" cluster
	I0316 16:55:48.971104  286447 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0316 16:55:48.973238  286447 out.go:177] * Pulling base image v0.0.42-1710284843-18375 ...
	I0316 16:55:48.975248  286447 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0316 16:55:48.975303  286447 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18277-280225/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4
	I0316 16:55:48.975316  286447 cache.go:56] Caching tarball of preloaded images
	I0316 16:55:48.975317  286447 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f in local docker daemon
	I0316 16:55:48.975408  286447 preload.go:173] Found /home/jenkins/minikube-integration/18277-280225/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0316 16:55:48.975419  286447 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on containerd
	I0316 16:55:48.975794  286447 profile.go:142] Saving config to /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/addons-821353/config.json ...
	I0316 16:55:48.975869  286447 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/addons-821353/config.json: {Name:mkd905e3bd9e01dd72c34b6b1bd37af1c716b9a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 16:55:48.990402  286447 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f to local cache
	I0316 16:55:48.990575  286447 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f in local cache directory
	I0316 16:55:48.990598  286447 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f in local cache directory, skipping pull
	I0316 16:55:48.990603  286447 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f exists in cache, skipping pull
	I0316 16:55:48.990611  286447 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f as a tarball
	I0316 16:55:48.990616  286447 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f from local cache
	I0316 16:56:05.290808  286447 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f from cached tarball
	I0316 16:56:05.290858  286447 cache.go:194] Successfully downloaded all kic artifacts
	I0316 16:56:05.290916  286447 start.go:360] acquireMachinesLock for addons-821353: {Name:mk7d1a0ed6732ed9c39a16833d0f488d8aed5e2b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0316 16:56:05.291073  286447 start.go:364] duration metric: took 132.093µs to acquireMachinesLock for "addons-821353"
	I0316 16:56:05.291115  286447 start.go:93] Provisioning new machine with config: &{Name:addons-821353 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-821353 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:fa
lse CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0316 16:56:05.291225  286447 start.go:125] createHost starting for "" (driver="docker")
	I0316 16:56:05.293701  286447 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0316 16:56:05.293955  286447 start.go:159] libmachine.API.Create for "addons-821353" (driver="docker")
	I0316 16:56:05.293988  286447 client.go:168] LocalClient.Create starting
	I0316 16:56:05.294112  286447 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18277-280225/.minikube/certs/ca.pem
	I0316 16:56:05.557367  286447 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18277-280225/.minikube/certs/cert.pem
	I0316 16:56:06.047236  286447 cli_runner.go:164] Run: docker network inspect addons-821353 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0316 16:56:06.064981  286447 cli_runner.go:211] docker network inspect addons-821353 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0316 16:56:06.065080  286447 network_create.go:281] running [docker network inspect addons-821353] to gather additional debugging logs...
	I0316 16:56:06.065102  286447 cli_runner.go:164] Run: docker network inspect addons-821353
	W0316 16:56:06.080760  286447 cli_runner.go:211] docker network inspect addons-821353 returned with exit code 1
	I0316 16:56:06.080797  286447 network_create.go:284] error running [docker network inspect addons-821353]: docker network inspect addons-821353: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-821353 not found
	I0316 16:56:06.080811  286447 network_create.go:286] output of [docker network inspect addons-821353]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-821353 not found
	
	** /stderr **
	I0316 16:56:06.080916  286447 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0316 16:56:06.097180  286447 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40029ac6c0}
	I0316 16:56:06.097228  286447 network_create.go:124] attempt to create docker network addons-821353 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0316 16:56:06.097286  286447 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-821353 addons-821353
	I0316 16:56:06.162215  286447 network_create.go:108] docker network addons-821353 192.168.49.0/24 created
	I0316 16:56:06.162248  286447 kic.go:121] calculated static IP "192.168.49.2" for the "addons-821353" container
	I0316 16:56:06.162317  286447 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0316 16:56:06.176969  286447 cli_runner.go:164] Run: docker volume create addons-821353 --label name.minikube.sigs.k8s.io=addons-821353 --label created_by.minikube.sigs.k8s.io=true
	I0316 16:56:06.192771  286447 oci.go:103] Successfully created a docker volume addons-821353
	I0316 16:56:06.192861  286447 cli_runner.go:164] Run: docker run --rm --name addons-821353-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-821353 --entrypoint /usr/bin/test -v addons-821353:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f -d /var/lib
	I0316 16:56:08.195450  286447 cli_runner.go:217] Completed: docker run --rm --name addons-821353-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-821353 --entrypoint /usr/bin/test -v addons-821353:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f -d /var/lib: (2.002537063s)
	I0316 16:56:08.195482  286447 oci.go:107] Successfully prepared a docker volume addons-821353
	I0316 16:56:08.195505  286447 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0316 16:56:08.195524  286447 kic.go:194] Starting extracting preloaded images to volume ...
	I0316 16:56:08.195625  286447 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18277-280225/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-821353:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f -I lz4 -xf /preloaded.tar -C /extractDir
	I0316 16:56:12.454557  286447 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18277-280225/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-821353:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f -I lz4 -xf /preloaded.tar -C /extractDir: (4.258884996s)
	I0316 16:56:12.454592  286447 kic.go:203] duration metric: took 4.259064089s to extract preloaded images to volume ...
	W0316 16:56:12.454780  286447 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0316 16:56:12.454916  286447 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0316 16:56:12.508630  286447 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-821353 --name addons-821353 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-821353 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-821353 --network addons-821353 --ip 192.168.49.2 --volume addons-821353:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f
	I0316 16:56:12.850095  286447 cli_runner.go:164] Run: docker container inspect addons-821353 --format={{.State.Running}}
	I0316 16:56:12.868142  286447 cli_runner.go:164] Run: docker container inspect addons-821353 --format={{.State.Status}}
	I0316 16:56:12.889001  286447 cli_runner.go:164] Run: docker exec addons-821353 stat /var/lib/dpkg/alternatives/iptables
	I0316 16:56:12.954983  286447 oci.go:144] the created container "addons-821353" has a running status.
	I0316 16:56:12.955014  286447 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18277-280225/.minikube/machines/addons-821353/id_rsa...
	I0316 16:56:13.647888  286447 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18277-280225/.minikube/machines/addons-821353/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0316 16:56:13.678070  286447 cli_runner.go:164] Run: docker container inspect addons-821353 --format={{.State.Status}}
	I0316 16:56:13.702368  286447 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0316 16:56:13.702387  286447 kic_runner.go:114] Args: [docker exec --privileged addons-821353 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0316 16:56:13.771609  286447 cli_runner.go:164] Run: docker container inspect addons-821353 --format={{.State.Status}}
	I0316 16:56:13.793794  286447 machine.go:94] provisionDockerMachine start ...
	I0316 16:56:13.793890  286447 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821353
	I0316 16:56:13.811893  286447 main.go:141] libmachine: Using SSH client type: native
	I0316 16:56:13.812168  286447 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e00] 0x3e4660 <nil>  [] 0s} 127.0.0.1 33145 <nil> <nil>}
	I0316 16:56:13.812177  286447 main.go:141] libmachine: About to run SSH command:
	hostname
	I0316 16:56:13.962941  286447 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-821353
	
	I0316 16:56:13.962969  286447 ubuntu.go:169] provisioning hostname "addons-821353"
	I0316 16:56:13.963041  286447 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821353
	I0316 16:56:13.979080  286447 main.go:141] libmachine: Using SSH client type: native
	I0316 16:56:13.979336  286447 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e00] 0x3e4660 <nil>  [] 0s} 127.0.0.1 33145 <nil> <nil>}
	I0316 16:56:13.979354  286447 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-821353 && echo "addons-821353" | sudo tee /etc/hostname
	I0316 16:56:14.143693  286447 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-821353
	
	I0316 16:56:14.143847  286447 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821353
	I0316 16:56:14.159930  286447 main.go:141] libmachine: Using SSH client type: native
	I0316 16:56:14.160185  286447 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e00] 0x3e4660 <nil>  [] 0s} 127.0.0.1 33145 <nil> <nil>}
	I0316 16:56:14.160208  286447 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-821353' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-821353/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-821353' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0316 16:56:14.299659  286447 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0316 16:56:14.299686  286447 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18277-280225/.minikube CaCertPath:/home/jenkins/minikube-integration/18277-280225/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18277-280225/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18277-280225/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18277-280225/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18277-280225/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18277-280225/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18277-280225/.minikube}
	I0316 16:56:14.299731  286447 ubuntu.go:177] setting up certificates
	I0316 16:56:14.299742  286447 provision.go:84] configureAuth start
	I0316 16:56:14.299824  286447 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-821353
	I0316 16:56:14.315678  286447 provision.go:143] copyHostCerts
	I0316 16:56:14.315761  286447 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18277-280225/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18277-280225/.minikube/ca.pem (1078 bytes)
	I0316 16:56:14.315890  286447 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18277-280225/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18277-280225/.minikube/cert.pem (1123 bytes)
	I0316 16:56:14.315960  286447 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18277-280225/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18277-280225/.minikube/key.pem (1675 bytes)
	I0316 16:56:14.316016  286447 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18277-280225/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18277-280225/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18277-280225/.minikube/certs/ca-key.pem org=jenkins.addons-821353 san=[127.0.0.1 192.168.49.2 addons-821353 localhost minikube]
	I0316 16:56:15.051390  286447 provision.go:177] copyRemoteCerts
	I0316 16:56:15.051472  286447 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0316 16:56:15.051524  286447 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821353
	I0316 16:56:15.070712  286447 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/18277-280225/.minikube/machines/addons-821353/id_rsa Username:docker}
	I0316 16:56:15.173759  286447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18277-280225/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0316 16:56:15.200256  286447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18277-280225/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0316 16:56:15.226261  286447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18277-280225/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0316 16:56:15.251054  286447 provision.go:87] duration metric: took 951.29261ms to configureAuth
	I0316 16:56:15.251128  286447 ubuntu.go:193] setting minikube options for container-runtime
	I0316 16:56:15.251368  286447 config.go:182] Loaded profile config "addons-821353": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0316 16:56:15.251385  286447 machine.go:97] duration metric: took 1.457566805s to provisionDockerMachine
	I0316 16:56:15.251394  286447 client.go:171] duration metric: took 9.957395173s to LocalClient.Create
	I0316 16:56:15.251430  286447 start.go:167] duration metric: took 9.957475681s to libmachine.API.Create "addons-821353"
	I0316 16:56:15.251446  286447 start.go:293] postStartSetup for "addons-821353" (driver="docker")
	I0316 16:56:15.251456  286447 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0316 16:56:15.251525  286447 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0316 16:56:15.251589  286447 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821353
	I0316 16:56:15.267483  286447 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/18277-280225/.minikube/machines/addons-821353/id_rsa Username:docker}
	I0316 16:56:15.368863  286447 ssh_runner.go:195] Run: cat /etc/os-release
	I0316 16:56:15.372319  286447 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0316 16:56:15.372360  286447 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0316 16:56:15.372373  286447 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0316 16:56:15.372381  286447 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0316 16:56:15.372392  286447 filesync.go:126] Scanning /home/jenkins/minikube-integration/18277-280225/.minikube/addons for local assets ...
	I0316 16:56:15.372461  286447 filesync.go:126] Scanning /home/jenkins/minikube-integration/18277-280225/.minikube/files for local assets ...
	I0316 16:56:15.372488  286447 start.go:296] duration metric: took 121.036066ms for postStartSetup
	I0316 16:56:15.372809  286447 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-821353
	I0316 16:56:15.388087  286447 profile.go:142] Saving config to /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/addons-821353/config.json ...
	I0316 16:56:15.388377  286447 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0316 16:56:15.388431  286447 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821353
	I0316 16:56:15.403695  286447 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/18277-280225/.minikube/machines/addons-821353/id_rsa Username:docker}
	I0316 16:56:15.496335  286447 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0316 16:56:15.500538  286447 start.go:128] duration metric: took 10.209293419s to createHost
	I0316 16:56:15.500562  286447 start.go:83] releasing machines lock for "addons-821353", held for 10.209476581s
	I0316 16:56:15.500632  286447 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-821353
	I0316 16:56:15.526878  286447 ssh_runner.go:195] Run: cat /version.json
	I0316 16:56:15.526938  286447 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821353
	I0316 16:56:15.526940  286447 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0316 16:56:15.526993  286447 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821353
	I0316 16:56:15.545630  286447 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/18277-280225/.minikube/machines/addons-821353/id_rsa Username:docker}
	I0316 16:56:15.556691  286447 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/18277-280225/.minikube/machines/addons-821353/id_rsa Username:docker}
	I0316 16:56:15.642884  286447 ssh_runner.go:195] Run: systemctl --version
	I0316 16:56:15.758573  286447 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0316 16:56:15.762906  286447 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0316 16:56:15.788489  286447 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0316 16:56:15.788593  286447 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0316 16:56:15.816383  286447 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0316 16:56:15.816408  286447 start.go:494] detecting cgroup driver to use...
	I0316 16:56:15.816441  286447 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0316 16:56:15.816496  286447 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0316 16:56:15.828850  286447 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0316 16:56:15.840459  286447 docker.go:217] disabling cri-docker service (if available) ...
	I0316 16:56:15.840522  286447 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0316 16:56:15.854405  286447 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0316 16:56:15.868898  286447 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0316 16:56:15.955276  286447 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0316 16:56:16.053360  286447 docker.go:233] disabling docker service ...
	I0316 16:56:16.053511  286447 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0316 16:56:16.073988  286447 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0316 16:56:16.086499  286447 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0316 16:56:16.175541  286447 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0316 16:56:16.264764  286447 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0316 16:56:16.276264  286447 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0316 16:56:16.293869  286447 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0316 16:56:16.304034  286447 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0316 16:56:16.314153  286447 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0316 16:56:16.314262  286447 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0316 16:56:16.324726  286447 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0316 16:56:16.335343  286447 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0316 16:56:16.345324  286447 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0316 16:56:16.354560  286447 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0316 16:56:16.363549  286447 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0316 16:56:16.373774  286447 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0316 16:56:16.382309  286447 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0316 16:56:16.390662  286447 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0316 16:56:16.477262  286447 ssh_runner.go:195] Run: sudo systemctl restart containerd
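The sed edits above leave containerd configured with the cgroupfs driver, the runc v2 shim, the pause:3.9 sandbox image, and /etc/cni/net.d as the CNI config directory. A minimal way to confirm the rewritten /etc/containerd/config.toml from inside the node (a sketch, assuming the addons-821353 profile is still running and `minikube ssh` is usable against it):

	out/minikube-linux-arm64 -p addons-821353 ssh "sudo grep -E 'SystemdCgroup|sandbox_image|conf_dir' /etc/containerd/config.toml"
	# expected, per the sed commands above:
	#   sandbox_image = "registry.k8s.io/pause:3.9"
	#   SystemdCgroup = false
	#   conf_dir = "/etc/cni/net.d"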
	I0316 16:56:16.610555  286447 start.go:541] Will wait 60s for socket path /run/containerd/containerd.sock
	I0316 16:56:16.610667  286447 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0316 16:56:16.614294  286447 start.go:562] Will wait 60s for crictl version
	I0316 16:56:16.614383  286447 ssh_runner.go:195] Run: which crictl
	I0316 16:56:16.617686  286447 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0316 16:56:16.654476  286447 start.go:578] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.28
	RuntimeApiVersion:  v1
	I0316 16:56:16.654598  286447 ssh_runner.go:195] Run: containerd --version
	I0316 16:56:16.676107  286447 ssh_runner.go:195] Run: containerd --version
	I0316 16:56:16.705471  286447 out.go:177] * Preparing Kubernetes v1.28.4 on containerd 1.6.28 ...
	I0316 16:56:16.707518  286447 cli_runner.go:164] Run: docker network inspect addons-821353 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0316 16:56:16.722341  286447 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0316 16:56:16.725838  286447 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
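The /etc/hosts patch above follows an idempotent strip-then-append pattern: remove any existing line for the hostname, append the desired mapping, and copy the result back into place. A standalone sketch of the same pattern (values taken from the log line above; run on the node):

	HOSTNAME=host.minikube.internal
	IP=192.168.49.1
	# drop any old entry for the host, then append the tab-separated mapping
	{ grep -v $'\t'"${HOSTNAME}"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$HOSTNAME"; } > /tmp/hosts.$$
	sudo cp /tmp/hosts.$$ /etc/hosts && rm -f /tmp/hosts.$$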
	I0316 16:56:16.736511  286447 kubeadm.go:877] updating cluster {Name:addons-821353 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-821353 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Cus
tomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0316 16:56:16.736633  286447 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0316 16:56:16.736692  286447 ssh_runner.go:195] Run: sudo crictl images --output json
	I0316 16:56:16.772884  286447 containerd.go:612] all images are preloaded for containerd runtime.
	I0316 16:56:16.772908  286447 containerd.go:519] Images already preloaded, skipping extraction
	I0316 16:56:16.772967  286447 ssh_runner.go:195] Run: sudo crictl images --output json
	I0316 16:56:16.810202  286447 containerd.go:612] all images are preloaded for containerd runtime.
	I0316 16:56:16.810225  286447 cache_images.go:84] Images are preloaded, skipping loading
	I0316 16:56:16.810233  286447 kubeadm.go:928] updating node { 192.168.49.2 8443 v1.28.4 containerd true true} ...
	I0316 16:56:16.810330  286447 kubeadm.go:940] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-821353 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-821353 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
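The kubelet unit drop-in shown above is later written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (see the scp lines further down). Once the node is provisioned, the effective unit can be inspected with standard systemd commands (a sketch, assuming a shell inside the node):

	systemctl cat kubelet                              # base unit plus the 10-kubeadm.conf drop-in
	systemctl show kubelet -p ExecStart --no-pager     # the merged ExecStart actually in effect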
	I0316 16:56:16.810405  286447 ssh_runner.go:195] Run: sudo crictl info
	I0316 16:56:16.846979  286447 cni.go:84] Creating CNI manager for ""
	I0316 16:56:16.847001  286447 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0316 16:56:16.847011  286447 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0316 16:56:16.847065  286447 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-821353 NodeName:addons-821353 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0316 16:56:16.847251  286447 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-821353"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
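The generated kubeadm config above is staged as /var/tmp/minikube/kubeadm.yaml.new and copied to /var/tmp/minikube/kubeadm.yaml before init (see the scp and cp commands further down). A quick sanity check of such a file, assuming a kubeadm new enough to ship `kubeadm config validate` (v1.26+):

	sudo /var/lib/minikube/binaries/v1.28.4/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml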
	
	I0316 16:56:16.847334  286447 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0316 16:56:16.856060  286447 binaries.go:44] Found k8s binaries, skipping transfer
	I0316 16:56:16.856145  286447 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0316 16:56:16.864901  286447 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0316 16:56:16.882919  286447 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0316 16:56:16.901265  286447 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2167 bytes)
	I0316 16:56:16.919124  286447 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0316 16:56:16.922735  286447 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0316 16:56:16.933656  286447 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0316 16:56:17.026990  286447 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0316 16:56:17.041218  286447 certs.go:68] Setting up /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/addons-821353 for IP: 192.168.49.2
	I0316 16:56:17.041244  286447 certs.go:194] generating shared ca certs ...
	I0316 16:56:17.041262  286447 certs.go:226] acquiring lock for ca certs: {Name:mk6d455ecce74ad164a5c9d511b938033d09479f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 16:56:17.041404  286447 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/18277-280225/.minikube/ca.key
	I0316 16:56:17.166407  286447 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18277-280225/.minikube/ca.crt ...
	I0316 16:56:17.166437  286447 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18277-280225/.minikube/ca.crt: {Name:mk9085400d5fef7441cb41ac1a2fbdafcf207f9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 16:56:17.167152  286447 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18277-280225/.minikube/ca.key ...
	I0316 16:56:17.167179  286447 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18277-280225/.minikube/ca.key: {Name:mk003cc7b6c0ffe7e297678f4c86784aab50b9a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 16:56:17.167284  286447 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18277-280225/.minikube/proxy-client-ca.key
	I0316 16:56:17.351263  286447 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18277-280225/.minikube/proxy-client-ca.crt ...
	I0316 16:56:17.351301  286447 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18277-280225/.minikube/proxy-client-ca.crt: {Name:mk71bb3dc32d51103f032c61b1529fcb09a735af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 16:56:17.352147  286447 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18277-280225/.minikube/proxy-client-ca.key ...
	I0316 16:56:17.352165  286447 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18277-280225/.minikube/proxy-client-ca.key: {Name:mk8a780fe8581cfe346641e128bfcf7b4f1cbe9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 16:56:17.352263  286447 certs.go:256] generating profile certs ...
	I0316 16:56:17.352327  286447 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/addons-821353/client.key
	I0316 16:56:17.352343  286447 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/addons-821353/client.crt with IP's: []
	I0316 16:56:17.684608  286447 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/addons-821353/client.crt ...
	I0316 16:56:17.684642  286447 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/addons-821353/client.crt: {Name:mkbc6b9616892773559ea4af211086baa6d1cf72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 16:56:17.684865  286447 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/addons-821353/client.key ...
	I0316 16:56:17.684880  286447 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/addons-821353/client.key: {Name:mk8fada7ab5dbc696286be58b9fac731965b20cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 16:56:17.684982  286447 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/addons-821353/apiserver.key.b52e69fc
	I0316 16:56:17.685005  286447 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/addons-821353/apiserver.crt.b52e69fc with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0316 16:56:18.171116  286447 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/addons-821353/apiserver.crt.b52e69fc ...
	I0316 16:56:18.171151  286447 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/addons-821353/apiserver.crt.b52e69fc: {Name:mk94a86c28c4724f544defd179cf4d76d842ee10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 16:56:18.171351  286447 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/addons-821353/apiserver.key.b52e69fc ...
	I0316 16:56:18.171366  286447 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/addons-821353/apiserver.key.b52e69fc: {Name:mke074706953abfde8f89cb4158679c46a48e00e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 16:56:18.172047  286447 certs.go:381] copying /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/addons-821353/apiserver.crt.b52e69fc -> /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/addons-821353/apiserver.crt
	I0316 16:56:18.172185  286447 certs.go:385] copying /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/addons-821353/apiserver.key.b52e69fc -> /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/addons-821353/apiserver.key
	I0316 16:56:18.172242  286447 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/addons-821353/proxy-client.key
	I0316 16:56:18.172266  286447 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/addons-821353/proxy-client.crt with IP's: []
	I0316 16:56:18.712618  286447 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/addons-821353/proxy-client.crt ...
	I0316 16:56:18.712651  286447 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/addons-821353/proxy-client.crt: {Name:mkde3879e10c0489bc4517fda4cd70d7f1221fb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 16:56:18.713816  286447 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/addons-821353/proxy-client.key ...
	I0316 16:56:18.713833  286447 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/addons-821353/proxy-client.key: {Name:mk4d0cebf0a84fd0fa218b7489f6e97e3a6adba1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 16:56:18.714038  286447 certs.go:484] found cert: /home/jenkins/minikube-integration/18277-280225/.minikube/certs/ca-key.pem (1679 bytes)
	I0316 16:56:18.714080  286447 certs.go:484] found cert: /home/jenkins/minikube-integration/18277-280225/.minikube/certs/ca.pem (1078 bytes)
	I0316 16:56:18.714108  286447 certs.go:484] found cert: /home/jenkins/minikube-integration/18277-280225/.minikube/certs/cert.pem (1123 bytes)
	I0316 16:56:18.714138  286447 certs.go:484] found cert: /home/jenkins/minikube-integration/18277-280225/.minikube/certs/key.pem (1675 bytes)
	I0316 16:56:18.714740  286447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18277-280225/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0316 16:56:18.742967  286447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18277-280225/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0316 16:56:18.769928  286447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18277-280225/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0316 16:56:18.795139  286447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18277-280225/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0316 16:56:18.820031  286447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/addons-821353/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0316 16:56:18.842829  286447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/addons-821353/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0316 16:56:18.866460  286447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/addons-821353/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0316 16:56:18.890517  286447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/addons-821353/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0316 16:56:18.914144  286447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18277-280225/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0316 16:56:18.937820  286447 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0316 16:56:18.956451  286447 ssh_runner.go:195] Run: openssl version
	I0316 16:56:18.961730  286447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0316 16:56:18.970617  286447 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0316 16:56:18.973838  286447 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 16 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0316 16:56:18.973935  286447 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0316 16:56:18.980926  286447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
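The b5213941.0 link created above uses OpenSSL's subject-hash naming convention: the link name is the output of `openssl x509 -hash` for the CA, with a .0 suffix. A sketch of deriving it by hand (paths as in the log; run on the node):

	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	echo "${HASH}.0"                                   # b5213941.0, matching the symlink above
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"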
	I0316 16:56:18.990483  286447 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0316 16:56:18.993693  286447 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0316 16:56:18.993741  286447 kubeadm.go:391] StartCluster: {Name:addons-821353 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-821353 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames
:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Custom
QemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0316 16:56:18.993829  286447 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0316 16:56:18.993893  286447 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0316 16:56:19.031630  286447 cri.go:89] found id: ""
	I0316 16:56:19.031702  286447 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0316 16:56:19.040553  286447 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0316 16:56:19.049451  286447 kubeadm.go:213] ignoring SystemVerification for kubeadm because of docker driver
	I0316 16:56:19.049523  286447 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0316 16:56:19.058451  286447 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0316 16:56:19.058471  286447 kubeadm.go:156] found existing configuration files:
	
	I0316 16:56:19.058522  286447 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0316 16:56:19.067383  286447 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0316 16:56:19.067451  286447 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0316 16:56:19.076211  286447 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0316 16:56:19.085636  286447 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0316 16:56:19.085706  286447 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0316 16:56:19.094094  286447 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0316 16:56:19.102535  286447 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0316 16:56:19.102624  286447 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0316 16:56:19.110756  286447 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0316 16:56:19.119148  286447 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0316 16:56:19.119214  286447 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0316 16:56:19.127809  286447 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0316 16:56:19.180111  286447 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0316 16:56:19.180257  286447 kubeadm.go:309] [preflight] Running pre-flight checks
	I0316 16:56:19.221511  286447 kubeadm.go:309] [preflight] The system verification failed. Printing the output from the verification:
	I0316 16:56:19.221586  286447 kubeadm.go:309] KERNEL_VERSION: 5.15.0-1055-aws
	I0316 16:56:19.221628  286447 kubeadm.go:309] OS: Linux
	I0316 16:56:19.221677  286447 kubeadm.go:309] CGROUPS_CPU: enabled
	I0316 16:56:19.221727  286447 kubeadm.go:309] CGROUPS_CPUACCT: enabled
	I0316 16:56:19.221776  286447 kubeadm.go:309] CGROUPS_CPUSET: enabled
	I0316 16:56:19.221827  286447 kubeadm.go:309] CGROUPS_DEVICES: enabled
	I0316 16:56:19.221877  286447 kubeadm.go:309] CGROUPS_FREEZER: enabled
	I0316 16:56:19.221929  286447 kubeadm.go:309] CGROUPS_MEMORY: enabled
	I0316 16:56:19.221975  286447 kubeadm.go:309] CGROUPS_PIDS: enabled
	I0316 16:56:19.222025  286447 kubeadm.go:309] CGROUPS_HUGETLB: enabled
	I0316 16:56:19.222071  286447 kubeadm.go:309] CGROUPS_BLKIO: enabled
	I0316 16:56:19.305227  286447 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0316 16:56:19.305337  286447 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0316 16:56:19.305436  286447 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0316 16:56:19.547958  286447 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0316 16:56:19.550576  286447 out.go:204]   - Generating certificates and keys ...
	I0316 16:56:19.550684  286447 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0316 16:56:19.550768  286447 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0316 16:56:19.731926  286447 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0316 16:56:19.896624  286447 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0316 16:56:20.171292  286447 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0316 16:56:20.324893  286447 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0316 16:56:21.054939  286447 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0316 16:56:21.055283  286447 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-821353 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0316 16:56:21.300624  286447 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0316 16:56:21.300972  286447 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-821353 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0316 16:56:22.181386  286447 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0316 16:56:22.588753  286447 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0316 16:56:23.780521  286447 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0316 16:56:23.781034  286447 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0316 16:56:23.934044  286447 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0316 16:56:24.304282  286447 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0316 16:56:25.336402  286447 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0316 16:56:26.056542  286447 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0316 16:56:26.057556  286447 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0316 16:56:26.060997  286447 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0316 16:56:26.063706  286447 out.go:204]   - Booting up control plane ...
	I0316 16:56:26.063823  286447 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0316 16:56:26.064098  286447 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0316 16:56:26.069072  286447 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0316 16:56:26.085236  286447 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0316 16:56:26.086090  286447 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0316 16:56:26.086320  286447 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0316 16:56:26.174118  286447 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0316 16:56:33.677941  286447 kubeadm.go:309] [apiclient] All control plane components are healthy after 7.504209 seconds
	I0316 16:56:33.678776  286447 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0316 16:56:33.694230  286447 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0316 16:56:34.218314  286447 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0316 16:56:34.218514  286447 kubeadm.go:309] [mark-control-plane] Marking the node addons-821353 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0316 16:56:34.732042  286447 kubeadm.go:309] [bootstrap-token] Using token: 51a2hi.wr7ydgqhksszxs6n
	I0316 16:56:34.734196  286447 out.go:204]   - Configuring RBAC rules ...
	I0316 16:56:34.734313  286447 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0316 16:56:34.740988  286447 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0316 16:56:34.750331  286447 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0316 16:56:34.755106  286447 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0316 16:56:34.758906  286447 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0316 16:56:34.762864  286447 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0316 16:56:34.776837  286447 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0316 16:56:35.048165  286447 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0316 16:56:35.148661  286447 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0316 16:56:35.148687  286447 kubeadm.go:309] 
	I0316 16:56:35.148747  286447 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0316 16:56:35.148759  286447 kubeadm.go:309] 
	I0316 16:56:35.148834  286447 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0316 16:56:35.148844  286447 kubeadm.go:309] 
	I0316 16:56:35.148869  286447 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0316 16:56:35.148928  286447 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0316 16:56:35.148981  286447 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0316 16:56:35.148990  286447 kubeadm.go:309] 
	I0316 16:56:35.149042  286447 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0316 16:56:35.149050  286447 kubeadm.go:309] 
	I0316 16:56:35.149097  286447 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0316 16:56:35.149106  286447 kubeadm.go:309] 
	I0316 16:56:35.149156  286447 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0316 16:56:35.149240  286447 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0316 16:56:35.149317  286447 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0316 16:56:35.149328  286447 kubeadm.go:309] 
	I0316 16:56:35.149409  286447 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0316 16:56:35.149485  286447 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0316 16:56:35.149497  286447 kubeadm.go:309] 
	I0316 16:56:35.149578  286447 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 51a2hi.wr7ydgqhksszxs6n \
	I0316 16:56:35.149681  286447 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:c6354abf1d1919267e82e90c1fdf768e2f30fa2f6f3fed64a34f2365731d78b8 \
	I0316 16:56:35.149703  286447 kubeadm.go:309] 	--control-plane 
	I0316 16:56:35.149712  286447 kubeadm.go:309] 
	I0316 16:56:35.149794  286447 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0316 16:56:35.149804  286447 kubeadm.go:309] 
	I0316 16:56:35.149886  286447 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 51a2hi.wr7ydgqhksszxs6n \
	I0316 16:56:35.149988  286447 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:c6354abf1d1919267e82e90c1fdf768e2f30fa2f6f3fed64a34f2365731d78b8 
	I0316 16:56:35.154495  286447 kubeadm.go:309] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1055-aws\n", err: exit status 1
	I0316 16:56:35.154614  286447 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
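The --discovery-token-ca-cert-hash value in the join commands above is the SHA-256 digest of the cluster CA's DER-encoded public key. It can be recomputed on the control-plane node with the standard kubeadm recipe (a sketch, assuming an RSA CA key and the certificatesDir from the config above):

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'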
	I0316 16:56:35.154634  286447 cni.go:84] Creating CNI manager for ""
	I0316 16:56:35.154642  286447 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0316 16:56:35.157267  286447 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0316 16:56:35.159410  286447 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0316 16:56:35.164703  286447 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0316 16:56:35.164723  286447 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0316 16:56:35.200369  286447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
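The manifest applied above deploys kindnet as the CNI plugin (per the cni.go:143 decision earlier). A sketch of verifying it rolled out, assuming the DaemonSet is named kindnet and labelled app=kindnet in kube-system as in the upstream kindnet manifest:

	kubectl --context addons-821353 -n kube-system rollout status ds/kindnet --timeout=120s
	kubectl --context addons-821353 -n kube-system get pods -l app=kindnet -o wide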
	I0316 16:56:36.125260  286447 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0316 16:56:36.125408  286447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-821353 minikube.k8s.io/updated_at=2024_03_16T16_56_36_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=dcb7bcec19ba52ac09364e1139fb2071215a1bc6 minikube.k8s.io/name=addons-821353 minikube.k8s.io/primary=true
	I0316 16:56:36.125449  286447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 16:56:36.276568  286447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 16:56:36.276627  286447 ops.go:34] apiserver oom_adj: -16
	I0316 16:56:36.777574  286447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 16:56:37.276718  286447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 16:56:37.777256  286447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 16:56:38.276715  286447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 16:56:38.776672  286447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 16:56:39.277636  286447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 16:56:39.776973  286447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 16:56:40.276703  286447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 16:56:40.777613  286447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 16:56:41.277026  286447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 16:56:41.777038  286447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 16:56:42.276731  286447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 16:56:42.777448  286447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 16:56:43.276701  286447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 16:56:43.777293  286447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 16:56:44.277484  286447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 16:56:44.776746  286447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 16:56:45.277678  286447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 16:56:45.777462  286447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 16:56:46.277273  286447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 16:56:46.776994  286447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 16:56:47.277428  286447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 16:56:47.776867  286447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 16:56:48.276873  286447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 16:56:48.776786  286447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 16:56:48.959731  286447 kubeadm.go:1107] duration metric: took 12.834366812s to wait for elevateKubeSystemPrivileges
	W0316 16:56:48.959767  286447 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0316 16:56:48.959775  286447 kubeadm.go:393] duration metric: took 29.966038863s to StartCluster
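The run of repeated `kubectl get sa default` calls above is a poll loop: the cluster is queried until the "default" ServiceAccount exists, which is what the elevateKubeSystemPrivileges wait above measures. A minimal equivalent of that loop (same binary and kubeconfig paths as in the log):

	until sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default \
	    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done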
	I0316 16:56:48.959791  286447 settings.go:142] acquiring lock: {Name:mkcd5f7504890e5ae44ee0b7a2caa6ef5c6c8fbc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 16:56:48.959917  286447 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18277-280225/kubeconfig
	I0316 16:56:48.960292  286447 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18277-280225/kubeconfig: {Name:mk8864b14e2dcaa49893fcecc40453b6fe139389 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 16:56:48.961072  286447 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0316 16:56:48.963349  286447 out.go:177] * Verifying Kubernetes components...
	I0316 16:56:48.961185  286447 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0316 16:56:48.961347  286447 config.go:182] Loaded profile config "addons-821353": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0316 16:56:48.961355  286447 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0316 16:56:48.965217  286447 addons.go:69] Setting yakd=true in profile "addons-821353"
	I0316 16:56:48.965254  286447 addons.go:234] Setting addon yakd=true in "addons-821353"
	I0316 16:56:48.965288  286447 host.go:66] Checking if "addons-821353" exists ...
	I0316 16:56:48.965789  286447 cli_runner.go:164] Run: docker container inspect addons-821353 --format={{.State.Status}}
	I0316 16:56:48.965926  286447 addons.go:69] Setting ingress=true in profile "addons-821353"
	I0316 16:56:48.965950  286447 addons.go:234] Setting addon ingress=true in "addons-821353"
	I0316 16:56:48.965984  286447 host.go:66] Checking if "addons-821353" exists ...
	I0316 16:56:48.966340  286447 cli_runner.go:164] Run: docker container inspect addons-821353 --format={{.State.Status}}
	I0316 16:56:48.966845  286447 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0316 16:56:48.967052  286447 addons.go:69] Setting ingress-dns=true in profile "addons-821353"
	I0316 16:56:48.967080  286447 addons.go:234] Setting addon ingress-dns=true in "addons-821353"
	I0316 16:56:48.967108  286447 host.go:66] Checking if "addons-821353" exists ...
	I0316 16:56:48.967490  286447 cli_runner.go:164] Run: docker container inspect addons-821353 --format={{.State.Status}}
	I0316 16:56:48.970023  286447 addons.go:69] Setting cloud-spanner=true in profile "addons-821353"
	I0316 16:56:48.970105  286447 addons.go:234] Setting addon cloud-spanner=true in "addons-821353"
	I0316 16:56:48.970165  286447 host.go:66] Checking if "addons-821353" exists ...
	I0316 16:56:48.970329  286447 addons.go:69] Setting inspektor-gadget=true in profile "addons-821353"
	I0316 16:56:48.970361  286447 addons.go:234] Setting addon inspektor-gadget=true in "addons-821353"
	I0316 16:56:48.970395  286447 host.go:66] Checking if "addons-821353" exists ...
	I0316 16:56:48.970662  286447 cli_runner.go:164] Run: docker container inspect addons-821353 --format={{.State.Status}}
	I0316 16:56:48.970760  286447 cli_runner.go:164] Run: docker container inspect addons-821353 --format={{.State.Status}}
	I0316 16:56:48.982086  286447 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-821353"
	I0316 16:56:48.982174  286447 addons.go:69] Setting metrics-server=true in profile "addons-821353"
	I0316 16:56:48.982585  286447 addons.go:234] Setting addon metrics-server=true in "addons-821353"
	I0316 16:56:48.982742  286447 host.go:66] Checking if "addons-821353" exists ...
	I0316 16:56:48.985018  286447 cli_runner.go:164] Run: docker container inspect addons-821353 --format={{.State.Status}}
	I0316 16:56:48.982204  286447 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-821353"
	I0316 16:56:49.017899  286447 host.go:66] Checking if "addons-821353" exists ...
	I0316 16:56:49.018427  286447 cli_runner.go:164] Run: docker container inspect addons-821353 --format={{.State.Status}}
	I0316 16:56:48.982212  286447 addons.go:69] Setting default-storageclass=true in profile "addons-821353"
	I0316 16:56:49.019858  286447 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-821353"
	I0316 16:56:49.020175  286447 cli_runner.go:164] Run: docker container inspect addons-821353 --format={{.State.Status}}
	I0316 16:56:48.982238  286447 addons.go:69] Setting gcp-auth=true in profile "addons-821353"
	I0316 16:56:49.017549  286447 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-821353"
	I0316 16:56:49.037393  286447 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-821353"
	I0316 16:56:49.037439  286447 host.go:66] Checking if "addons-821353" exists ...
	I0316 16:56:49.017565  286447 addons.go:69] Setting registry=true in profile "addons-821353"
	I0316 16:56:49.047140  286447 addons.go:234] Setting addon registry=true in "addons-821353"
	I0316 16:56:49.017572  286447 addons.go:69] Setting storage-provisioner=true in profile "addons-821353"
	I0316 16:56:49.047218  286447 addons.go:234] Setting addon storage-provisioner=true in "addons-821353"
	I0316 16:56:49.047252  286447 host.go:66] Checking if "addons-821353" exists ...
	I0316 16:56:49.017577  286447 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-821353"
	I0316 16:56:49.047652  286447 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-821353"
	I0316 16:56:49.055079  286447 mustload.go:65] Loading cluster: addons-821353
	I0316 16:56:49.055304  286447 config.go:182] Loaded profile config "addons-821353": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0316 16:56:49.055559  286447 cli_runner.go:164] Run: docker container inspect addons-821353 --format={{.State.Status}}
	I0316 16:56:49.017581  286447 addons.go:69] Setting volumesnapshots=true in profile "addons-821353"
	I0316 16:56:49.056328  286447 addons.go:234] Setting addon volumesnapshots=true in "addons-821353"
	I0316 16:56:49.056441  286447 host.go:66] Checking if "addons-821353" exists ...
	I0316 16:56:49.056881  286447 cli_runner.go:164] Run: docker container inspect addons-821353 --format={{.State.Status}}
	I0316 16:56:49.068252  286447 cli_runner.go:164] Run: docker container inspect addons-821353 --format={{.State.Status}}
	I0316 16:56:49.068520  286447 cli_runner.go:164] Run: docker container inspect addons-821353 --format={{.State.Status}}
	I0316 16:56:49.108389  286447 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0316 16:56:49.113426  286447 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.0
	I0316 16:56:49.118115  286447 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0316 16:56:49.127318  286447 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0316 16:56:49.127388  286447 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0316 16:56:49.127479  286447 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821353
	I0316 16:56:49.126035  286447 cli_runner.go:164] Run: docker container inspect addons-821353 --format={{.State.Status}}
	I0316 16:56:49.126073  286447 host.go:66] Checking if "addons-821353" exists ...
	I0316 16:56:49.160030  286447 cli_runner.go:164] Run: docker container inspect addons-821353 --format={{.State.Status}}
	I0316 16:56:49.183731  286447 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.0
	I0316 16:56:49.185948  286447 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0316 16:56:49.185969  286447 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0316 16:56:49.186035  286447 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821353
	I0316 16:56:49.195821  286447 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.26.0
	I0316 16:56:49.220154  286447 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0316 16:56:49.228469  286447 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0316 16:56:49.220306  286447 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0316 16:56:49.220364  286447 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0316 16:56:49.220371  286447 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.14
	I0316 16:56:49.228716  286447 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0316 16:56:49.232736  286447 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0316 16:56:49.232820  286447 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0316 16:56:49.232920  286447 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821353
	I0316 16:56:49.228837  286447 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0316 16:56:49.236721  286447 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821353
	I0316 16:56:49.257192  286447 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0316 16:56:49.257215  286447 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0316 16:56:49.257267  286447 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821353
	I0316 16:56:49.229111  286447 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821353
	I0316 16:56:49.257094  286447 addons.go:234] Setting addon default-storageclass=true in "addons-821353"
	I0316 16:56:49.346573  286447 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.5
	I0316 16:56:49.354492  286447 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0316 16:56:49.354564  286447 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0316 16:56:49.354659  286447 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821353
	I0316 16:56:49.375996  286447 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0316 16:56:49.347731  286447 host.go:66] Checking if "addons-821353" exists ...
	I0316 16:56:49.350526  286447 host.go:66] Checking if "addons-821353" exists ...
	I0316 16:56:49.347690  286447 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0316 16:56:49.376135  286447 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/18277-280225/.minikube/machines/addons-821353/id_rsa Username:docker}
	I0316 16:56:49.378868  286447 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0316 16:56:49.378885  286447 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0316 16:56:49.379854  286447 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821353
	I0316 16:56:49.398015  286447 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-821353"
	I0316 16:56:49.398057  286447 host.go:66] Checking if "addons-821353" exists ...
	I0316 16:56:49.398758  286447 cli_runner.go:164] Run: docker container inspect addons-821353 --format={{.State.Status}}
	I0316 16:56:49.409650  286447 out.go:177]   - Using image docker.io/registry:2.8.3
	I0316 16:56:49.415754  286447 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0316 16:56:49.411720  286447 cli_runner.go:164] Run: docker container inspect addons-821353 --format={{.State.Status}}
	I0316 16:56:49.415309  286447 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/18277-280225/.minikube/machines/addons-821353/id_rsa Username:docker}
	I0316 16:56:49.418115  286447 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0316 16:56:49.418219  286447 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0316 16:56:49.420060  286447 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0316 16:56:49.420130  286447 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821353
	I0316 16:56:49.443127  286447 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/18277-280225/.minikube/machines/addons-821353/id_rsa Username:docker}
	I0316 16:56:49.458798  286447 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0316 16:56:49.471761  286447 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0316 16:56:49.471834  286447 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0316 16:56:49.471930  286447 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821353
	I0316 16:56:49.480617  286447 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0316 16:56:49.486212  286447 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0316 16:56:49.488910  286447 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0316 16:56:49.497497  286447 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0316 16:56:49.495005  286447 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/18277-280225/.minikube/machines/addons-821353/id_rsa Username:docker}
	I0316 16:56:49.506562  286447 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0316 16:56:49.510571  286447 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0316 16:56:49.511995  286447 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/18277-280225/.minikube/machines/addons-821353/id_rsa Username:docker}
	I0316 16:56:49.513583  286447 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0316 16:56:49.513673  286447 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0316 16:56:49.513736  286447 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821353
	I0316 16:56:49.543448  286447 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0316 16:56:49.545693  286447 out.go:177]   - Using image docker.io/busybox:stable
	I0316 16:56:49.548194  286447 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0316 16:56:49.548225  286447 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0316 16:56:49.548304  286447 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821353
	I0316 16:56:49.550192  286447 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/18277-280225/.minikube/machines/addons-821353/id_rsa Username:docker}
	I0316 16:56:49.586029  286447 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/18277-280225/.minikube/machines/addons-821353/id_rsa Username:docker}
	I0316 16:56:49.605506  286447 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/18277-280225/.minikube/machines/addons-821353/id_rsa Username:docker}
	I0316 16:56:49.639421  286447 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0316 16:56:49.639442  286447 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0316 16:56:49.639501  286447 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821353
	I0316 16:56:49.652983  286447 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/18277-280225/.minikube/machines/addons-821353/id_rsa Username:docker}
	I0316 16:56:49.672269  286447 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/18277-280225/.minikube/machines/addons-821353/id_rsa Username:docker}
	I0316 16:56:49.689938  286447 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/18277-280225/.minikube/machines/addons-821353/id_rsa Username:docker}
	I0316 16:56:49.692265  286447 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/18277-280225/.minikube/machines/addons-821353/id_rsa Username:docker}
	I0316 16:56:49.716088  286447 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/18277-280225/.minikube/machines/addons-821353/id_rsa Username:docker}
	I0316 16:56:50.033912  286447 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0316 16:56:50.071946  286447 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.101997197s)
	I0316 16:56:50.072155  286447 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0316 16:56:50.072268  286447 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.105401641s)
	I0316 16:56:50.072359  286447 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0316 16:56:50.106571  286447 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0316 16:56:50.126185  286447 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0316 16:56:50.237234  286447 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0316 16:56:50.243119  286447 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0316 16:56:50.243192  286447 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0316 16:56:50.248974  286447 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0316 16:56:50.249055  286447 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0316 16:56:50.259294  286447 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0316 16:56:50.259368  286447 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0316 16:56:50.299177  286447 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0316 16:56:50.299248  286447 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0316 16:56:50.396071  286447 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0316 16:56:50.396141  286447 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0316 16:56:50.457170  286447 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0316 16:56:50.457193  286447 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0316 16:56:50.462026  286447 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0316 16:56:50.462047  286447 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0316 16:56:50.470978  286447 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0316 16:56:50.472366  286447 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0316 16:56:50.472424  286447 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0316 16:56:50.483247  286447 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0316 16:56:50.483320  286447 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0316 16:56:50.504115  286447 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0316 16:56:50.521985  286447 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0316 16:56:50.533218  286447 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0316 16:56:50.533292  286447 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0316 16:56:50.565397  286447 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0316 16:56:50.565469  286447 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0316 16:56:50.670538  286447 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0316 16:56:50.678864  286447 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0316 16:56:50.678937  286447 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0316 16:56:50.682626  286447 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0316 16:56:50.682699  286447 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0316 16:56:50.686753  286447 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0316 16:56:50.686828  286447 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0316 16:56:50.704078  286447 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0316 16:56:50.704153  286447 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0316 16:56:50.866833  286447 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0316 16:56:50.866906  286447 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0316 16:56:50.877483  286447 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0316 16:56:50.877556  286447 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0316 16:56:51.006075  286447 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0316 16:56:51.006171  286447 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0316 16:56:51.035963  286447 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0316 16:56:51.081358  286447 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0316 16:56:51.081382  286447 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0316 16:56:51.111375  286447 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0316 16:56:51.111451  286447 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0316 16:56:51.175871  286447 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0316 16:56:51.175956  286447 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0316 16:56:51.350018  286447 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0316 16:56:51.350087  286447 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0316 16:56:51.364798  286447 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0316 16:56:51.364859  286447 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0316 16:56:51.451093  286447 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0316 16:56:51.451165  286447 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0316 16:56:51.576208  286447 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0316 16:56:51.590703  286447 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0316 16:56:51.616032  286447 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0316 16:56:51.616094  286447 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0316 16:56:51.770939  286447 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0316 16:56:51.771000  286447 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0316 16:56:51.837303  286447 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0316 16:56:51.837375  286447 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0316 16:56:51.964615  286447 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0316 16:56:51.964687  286447 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0316 16:56:52.086831  286447 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0316 16:56:52.086903  286447 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0316 16:56:52.192340  286447 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0316 16:56:52.308353  286447 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0316 16:56:52.308430  286447 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0316 16:56:52.513588  286447 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0316 16:56:52.513664  286447 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0316 16:56:52.801408  286447 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0316 16:56:53.335673  286447 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.263469535s)
	I0316 16:56:53.335704  286447 start.go:948] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
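	Note: the replace command above injects a hosts block into the CoreDNS Corefile so that host.minikube.internal resolves to the host gateway (192.168.49.1). A minimal way to confirm the injected stanza, using only the context name shown in this run (the command itself is illustrative and not part of the captured output), would be:
	    kubectl --context addons-821353 -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
	    # expected to contain:
	    #   hosts {
	    #      192.168.49.1 host.minikube.internal
	    #      fallthrough
	    #   }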
	I0316 16:56:53.336774  286447 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.263201434s)
	I0316 16:56:53.336942  286447 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.302952858s)
	I0316 16:56:53.337853  286447 node_ready.go:35] waiting up to 6m0s for node "addons-821353" to be "Ready" ...
	I0316 16:56:53.344977  286447 node_ready.go:49] node "addons-821353" has status "Ready":"True"
	I0316 16:56:53.344998  286447 node_ready.go:38] duration metric: took 7.088983ms for node "addons-821353" to be "Ready" ...
	I0316 16:56:53.345009  286447 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0316 16:56:53.354895  286447 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-f2clp" in "kube-system" namespace to be "Ready" ...
	I0316 16:56:53.840226  286447 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-821353" context rescaled to 1 replicas
	I0316 16:56:55.364458  286447 pod_ready.go:102] pod "coredns-5dd5756b68-f2clp" in "kube-system" namespace has status "Ready":"False"
	I0316 16:56:56.228439  286447 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0316 16:56:56.228564  286447 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821353
	I0316 16:56:56.250012  286447 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/18277-280225/.minikube/machines/addons-821353/id_rsa Username:docker}
	I0316 16:56:56.581052  286447 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0316 16:56:56.672004  286447 addons.go:234] Setting addon gcp-auth=true in "addons-821353"
	I0316 16:56:56.672057  286447 host.go:66] Checking if "addons-821353" exists ...
	I0316 16:56:56.672485  286447 cli_runner.go:164] Run: docker container inspect addons-821353 --format={{.State.Status}}
	I0316 16:56:56.696727  286447 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0316 16:56:56.696789  286447 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-821353
	I0316 16:56:56.720629  286447 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33145 SSHKeyPath:/home/jenkins/minikube-integration/18277-280225/.minikube/machines/addons-821353/id_rsa Username:docker}
	I0316 16:56:57.393758  286447 pod_ready.go:102] pod "coredns-5dd5756b68-f2clp" in "kube-system" namespace has status "Ready":"False"
	I0316 16:56:57.517836  286447 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.41122312s)
	I0316 16:56:57.517879  286447 addons.go:470] Verifying addon ingress=true in "addons-821353"
	I0316 16:56:57.520589  286447 out.go:177] * Verifying ingress addon...
	I0316 16:56:57.518043  286447 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (7.391835233s)
	I0316 16:56:57.518141  286447 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.280785204s)
	I0316 16:56:57.518198  286447 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.047152954s)
	I0316 16:56:57.518216  286447 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.014032317s)
	I0316 16:56:57.518237  286447 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (6.996177696s)
	I0316 16:56:57.518259  286447 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.847652503s)
	I0316 16:56:57.518352  286447 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.482307512s)
	I0316 16:56:57.518388  286447 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.942103416s)
	I0316 16:56:57.518466  286447 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.927697603s)
	I0316 16:56:57.518515  286447 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (5.32610099s)
	I0316 16:56:57.523216  286447 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0316 16:56:57.525893  286447 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-821353 service yakd-dashboard -n yakd-dashboard
	
	I0316 16:56:57.523727  286447 addons.go:470] Verifying addon registry=true in "addons-821353"
	I0316 16:56:57.523739  286447 addons.go:470] Verifying addon metrics-server=true in "addons-821353"
	W0316 16:56:57.523774  286447 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0316 16:56:57.528269  286447 retry.go:31] will retry after 243.456951ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
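	Note: the failure above is a CRD establishment race, not a bad manifest. The VolumeSnapshotClass in csi-hostpath-snapshotclass.yaml is applied in the same kubectl invocation that creates the snapshot.storage.k8s.io CRDs, so the API server has no mapping for the new kind on the first pass; the retried apply at 16:56:57.772 succeeds once the CRDs have been established. Outside the harness the same race can be avoided by waiting for the CRDs explicitly before applying any VolumeSnapshotClass objects; the wait below is an illustrative sketch, not part of the captured run:
	    kubectl --context addons-821353 wait --for condition=established --timeout=60s \
	      crd/volumesnapshotclasses.snapshot.storage.k8s.io \
	      crd/volumesnapshotcontents.snapshot.storage.k8s.io \
	      crd/volumesnapshots.snapshot.storage.k8s.io
	    # after this, csi-hostpath-snapshotclass.yaml (which lives on the node under /etc/kubernetes/addons/) can be re-applied cleanly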
	I0316 16:56:57.530982  286447 out.go:177] * Verifying registry addon...
	I0316 16:56:57.533859  286447 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0316 16:56:57.547485  286447 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0316 16:56:57.547512  286447 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0316 16:56:57.551505  286447 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0316 16:56:57.551528  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0316 16:56:57.558569  286447 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0316 16:56:57.772259  286447 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0316 16:56:58.029216  286447 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0316 16:56:58.038988  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0316 16:56:58.534743  286447 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0316 16:56:58.542454  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0316 16:56:59.063035  286447 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0316 16:56:59.063557  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0316 16:56:59.233779  286447 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.537016402s)
	I0316 16:56:59.236886  286447 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0316 16:56:59.233965  286447 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.432479123s)
	I0316 16:56:59.239344  286447 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-821353"
	I0316 16:56:59.242281  286447 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0316 16:56:59.245170  286447 out.go:177] * Verifying csi-hostpath-driver addon...
	I0316 16:56:59.248065  286447 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0316 16:56:59.245282  286447 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0316 16:56:59.248251  286447 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0316 16:56:59.257251  286447 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0316 16:56:59.257280  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0316 16:56:59.313768  286447 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0316 16:56:59.313843  286447 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0316 16:56:59.386603  286447 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0316 16:56:59.386679  286447 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0316 16:56:59.417593  286447 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0316 16:56:59.527766  286447 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0316 16:56:59.539044  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0316 16:56:59.658704  286447 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.886382314s)
	I0316 16:56:59.757234  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0316 16:56:59.866339  286447 pod_ready.go:102] pod "coredns-5dd5756b68-f2clp" in "kube-system" namespace has status "Ready":"False"
	I0316 16:57:00.050200  286447 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0316 16:57:00.052439  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0316 16:57:00.287412  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0316 16:57:00.576802  286447 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0316 16:57:00.584559  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0316 16:57:00.601049  286447 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.183414353s)
	I0316 16:57:00.604878  286447 addons.go:470] Verifying addon gcp-auth=true in "addons-821353"
	I0316 16:57:00.607242  286447 out.go:177] * Verifying gcp-auth addon...
	I0316 16:57:00.610054  286447 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0316 16:57:00.613460  286447 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0316 16:57:00.613483  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0316 16:57:00.754144  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0316 16:57:01.028465  286447 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0316 16:57:01.039142  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0316 16:57:01.114052  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0316 16:57:01.254162  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0316 16:57:01.528282  286447 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0316 16:57:01.539073  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0316 16:57:01.613890  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0316 16:57:01.755015  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0316 16:57:02.032059  286447 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0316 16:57:02.040814  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0316 16:57:02.114818  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0316 16:57:02.254915  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0316 16:57:02.365191  286447 pod_ready.go:102] pod "coredns-5dd5756b68-f2clp" in "kube-system" namespace has status "Ready":"False"
	I0316 16:57:02.528867  286447 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0316 16:57:02.538921  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0316 16:57:02.614814  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0316 16:57:02.754918  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0316 16:57:03.028254  286447 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0316 16:57:03.039505  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0316 16:57:03.114195  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0316 16:57:03.253798  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0316 16:57:03.527843  286447 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0316 16:57:03.540333  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0316 16:57:03.614064  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0316 16:57:03.753611  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0316 16:57:03.866023  286447 pod_ready.go:92] pod "coredns-5dd5756b68-f2clp" in "kube-system" namespace has status "Ready":"True"
	I0316 16:57:03.866094  286447 pod_ready.go:81] duration metric: took 10.511128064s for pod "coredns-5dd5756b68-f2clp" in "kube-system" namespace to be "Ready" ...
	I0316 16:57:03.866122  286447 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-m5dmk" in "kube-system" namespace to be "Ready" ...
	I0316 16:57:03.868596  286447 pod_ready.go:97] error getting pod "coredns-5dd5756b68-m5dmk" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-m5dmk" not found
	I0316 16:57:03.868667  286447 pod_ready.go:81] duration metric: took 2.523456ms for pod "coredns-5dd5756b68-m5dmk" in "kube-system" namespace to be "Ready" ...
	E0316 16:57:03.868692  286447 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-m5dmk" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-m5dmk" not found
	I0316 16:57:03.868713  286447 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-821353" in "kube-system" namespace to be "Ready" ...
	I0316 16:57:03.874548  286447 pod_ready.go:92] pod "etcd-addons-821353" in "kube-system" namespace has status "Ready":"True"
	I0316 16:57:03.874615  286447 pod_ready.go:81] duration metric: took 5.869003ms for pod "etcd-addons-821353" in "kube-system" namespace to be "Ready" ...
	I0316 16:57:03.874646  286447 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-821353" in "kube-system" namespace to be "Ready" ...
	I0316 16:57:03.882911  286447 pod_ready.go:92] pod "kube-apiserver-addons-821353" in "kube-system" namespace has status "Ready":"True"
	I0316 16:57:03.882981  286447 pod_ready.go:81] duration metric: took 8.31209ms for pod "kube-apiserver-addons-821353" in "kube-system" namespace to be "Ready" ...
	I0316 16:57:03.883008  286447 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-821353" in "kube-system" namespace to be "Ready" ...
	I0316 16:57:03.889767  286447 pod_ready.go:92] pod "kube-controller-manager-addons-821353" in "kube-system" namespace has status "Ready":"True"
	I0316 16:57:03.889847  286447 pod_ready.go:81] duration metric: took 6.816452ms for pod "kube-controller-manager-addons-821353" in "kube-system" namespace to be "Ready" ...
	I0316 16:57:03.889884  286447 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-t7nx5" in "kube-system" namespace to be "Ready" ...
	I0316 16:57:04.028223  286447 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0316 16:57:04.038797  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0316 16:57:04.059111  286447 pod_ready.go:92] pod "kube-proxy-t7nx5" in "kube-system" namespace has status "Ready":"True"
	I0316 16:57:04.059184  286447 pod_ready.go:81] duration metric: took 169.25888ms for pod "kube-proxy-t7nx5" in "kube-system" namespace to be "Ready" ...
	I0316 16:57:04.059212  286447 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-821353" in "kube-system" namespace to be "Ready" ...
	I0316 16:57:04.114282  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0316 16:57:04.253902  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0316 16:57:04.459798  286447 pod_ready.go:92] pod "kube-scheduler-addons-821353" in "kube-system" namespace has status "Ready":"True"
	I0316 16:57:04.459823  286447 pod_ready.go:81] duration metric: took 400.589387ms for pod "kube-scheduler-addons-821353" in "kube-system" namespace to be "Ready" ...
	I0316 16:57:04.459835  286447 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-qkppj" in "kube-system" namespace to be "Ready" ...
	I0316 16:57:04.528707  286447 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0316 16:57:04.539287  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0316 16:57:04.613782  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0316 16:57:04.754687  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0316 16:57:04.865410  286447 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-qkppj" in "kube-system" namespace has status "Ready":"True"
	I0316 16:57:04.865478  286447 pod_ready.go:81] duration metric: took 405.631859ms for pod "nvidia-device-plugin-daemonset-qkppj" in "kube-system" namespace to be "Ready" ...
	I0316 16:57:04.865519  286447 pod_ready.go:38] duration metric: took 11.520499169s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0316 16:57:04.865554  286447 api_server.go:52] waiting for apiserver process to appear ...
	I0316 16:57:04.865644  286447 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 16:57:04.882094  286447 api_server.go:72] duration metric: took 15.920969088s to wait for apiserver process to appear ...
	I0316 16:57:04.882169  286447 api_server.go:88] waiting for apiserver healthz status ...
	I0316 16:57:04.882206  286447 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0316 16:57:04.892166  286447 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0316 16:57:04.893620  286447 api_server.go:141] control plane version: v1.28.4
	I0316 16:57:04.893641  286447 api_server.go:131] duration metric: took 11.452797ms to wait for apiserver health ...
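	Note: the healthz and version probes above go straight to the API server on the node IP. Assuming the default system:public-info-viewer binding (which normally exposes /healthz, /livez, /readyz and /version to unauthenticated clients) is in place, an equivalent manual check from the CI host would be:
	    curl -sk https://192.168.49.2:8443/healthz
	    # ok
	    curl -sk https://192.168.49.2:8443/version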
	I0316 16:57:04.893650  286447 system_pods.go:43] waiting for kube-system pods to appear ...
	I0316 16:57:05.029442  286447 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0316 16:57:05.039966  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0316 16:57:05.068541  286447 system_pods.go:59] 18 kube-system pods found
	I0316 16:57:05.068619  286447 system_pods.go:61] "coredns-5dd5756b68-f2clp" [09361a3d-f73f-47c3-b3c0-c6839923762d] Running
	I0316 16:57:05.068645  286447 system_pods.go:61] "csi-hostpath-attacher-0" [5e9d2931-45c4-485d-8449-6e57db00d9f8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0316 16:57:05.068685  286447 system_pods.go:61] "csi-hostpath-resizer-0" [6d85906a-f486-4a90-841d-5941f7ce923a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0316 16:57:05.068717  286447 system_pods.go:61] "csi-hostpathplugin-nhfb7" [5a5e57d4-d32c-4574-a827-efafaf1f1d91] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0316 16:57:05.068737  286447 system_pods.go:61] "etcd-addons-821353" [40661096-f35e-4d30-b046-901e7b8c62b7] Running
	I0316 16:57:05.068757  286447 system_pods.go:61] "kindnet-dk6xn" [dad8d14a-18e4-428c-966a-8bde9775b396] Running
	I0316 16:57:05.068796  286447 system_pods.go:61] "kube-apiserver-addons-821353" [f17f3053-03a8-4634-afa4-aa59927a2f80] Running
	I0316 16:57:05.068821  286447 system_pods.go:61] "kube-controller-manager-addons-821353" [bb116110-143e-4010-b2f8-3864624edf29] Running
	I0316 16:57:05.068844  286447 system_pods.go:61] "kube-ingress-dns-minikube" [954f0159-14a4-4c47-a9a2-3199d34dbb3e] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0316 16:57:05.068867  286447 system_pods.go:61] "kube-proxy-t7nx5" [e67dcab1-801c-4724-b7d4-008af403815f] Running
	I0316 16:57:05.068903  286447 system_pods.go:61] "kube-scheduler-addons-821353" [886dc577-8746-4d34-b5f3-5e20cf25b77e] Running
	I0316 16:57:05.068934  286447 system_pods.go:61] "metrics-server-69cf46c98-2rrqs" [08f4dba8-7457-424a-8c11-cc37bee4ee10] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0316 16:57:05.068955  286447 system_pods.go:61] "nvidia-device-plugin-daemonset-qkppj" [c0c89264-7552-4313-b0e3-a9203afe811f] Running
	I0316 16:57:05.068981  286447 system_pods.go:61] "registry-8vzh9" [b1612e75-26f6-4416-91f0-f432903d0021] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0316 16:57:05.069015  286447 system_pods.go:61] "registry-proxy-gpbwt" [5aa0d90a-8b19-4457-ac50-9fc1a7169ebc] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0316 16:57:05.069045  286447 system_pods.go:61] "snapshot-controller-58dbcc7b99-464tm" [ae19a8d7-5e89-4efb-9f40-f2a60d1f9b82] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0316 16:57:05.069089  286447 system_pods.go:61] "snapshot-controller-58dbcc7b99-g9g9q" [f54bcac9-de9a-4ac5-8da6-174e461156d8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0316 16:57:05.069135  286447 system_pods.go:61] "storage-provisioner" [41f06c82-3bea-4b9c-b8a3-13b9c24f9091] Running
	I0316 16:57:05.069168  286447 system_pods.go:74] duration metric: took 175.511446ms to wait for pod list to return data ...
	I0316 16:57:05.069196  286447 default_sa.go:34] waiting for default service account to be created ...
	I0316 16:57:05.116244  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0316 16:57:05.255277  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0316 16:57:05.259517  286447 default_sa.go:45] found service account: "default"
	I0316 16:57:05.259541  286447 default_sa.go:55] duration metric: took 190.322762ms for default service account to be created ...
	I0316 16:57:05.259551  286447 system_pods.go:116] waiting for k8s-apps to be running ...
	I0316 16:57:05.467767  286447 system_pods.go:86] 18 kube-system pods found
	I0316 16:57:05.467802  286447 system_pods.go:89] "coredns-5dd5756b68-f2clp" [09361a3d-f73f-47c3-b3c0-c6839923762d] Running
	I0316 16:57:05.467812  286447 system_pods.go:89] "csi-hostpath-attacher-0" [5e9d2931-45c4-485d-8449-6e57db00d9f8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0316 16:57:05.467837  286447 system_pods.go:89] "csi-hostpath-resizer-0" [6d85906a-f486-4a90-841d-5941f7ce923a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0316 16:57:05.467855  286447 system_pods.go:89] "csi-hostpathplugin-nhfb7" [5a5e57d4-d32c-4574-a827-efafaf1f1d91] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0316 16:57:05.467862  286447 system_pods.go:89] "etcd-addons-821353" [40661096-f35e-4d30-b046-901e7b8c62b7] Running
	I0316 16:57:05.467871  286447 system_pods.go:89] "kindnet-dk6xn" [dad8d14a-18e4-428c-966a-8bde9775b396] Running
	I0316 16:57:05.467876  286447 system_pods.go:89] "kube-apiserver-addons-821353" [f17f3053-03a8-4634-afa4-aa59927a2f80] Running
	I0316 16:57:05.467880  286447 system_pods.go:89] "kube-controller-manager-addons-821353" [bb116110-143e-4010-b2f8-3864624edf29] Running
	I0316 16:57:05.467890  286447 system_pods.go:89] "kube-ingress-dns-minikube" [954f0159-14a4-4c47-a9a2-3199d34dbb3e] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0316 16:57:05.467894  286447 system_pods.go:89] "kube-proxy-t7nx5" [e67dcab1-801c-4724-b7d4-008af403815f] Running
	I0316 16:57:05.467901  286447 system_pods.go:89] "kube-scheduler-addons-821353" [886dc577-8746-4d34-b5f3-5e20cf25b77e] Running
	I0316 16:57:05.467927  286447 system_pods.go:89] "metrics-server-69cf46c98-2rrqs" [08f4dba8-7457-424a-8c11-cc37bee4ee10] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0316 16:57:05.467938  286447 system_pods.go:89] "nvidia-device-plugin-daemonset-qkppj" [c0c89264-7552-4313-b0e3-a9203afe811f] Running
	I0316 16:57:05.467945  286447 system_pods.go:89] "registry-8vzh9" [b1612e75-26f6-4416-91f0-f432903d0021] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0316 16:57:05.467956  286447 system_pods.go:89] "registry-proxy-gpbwt" [5aa0d90a-8b19-4457-ac50-9fc1a7169ebc] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0316 16:57:05.467963  286447 system_pods.go:89] "snapshot-controller-58dbcc7b99-464tm" [ae19a8d7-5e89-4efb-9f40-f2a60d1f9b82] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0316 16:57:05.467974  286447 system_pods.go:89] "snapshot-controller-58dbcc7b99-g9g9q" [f54bcac9-de9a-4ac5-8da6-174e461156d8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0316 16:57:05.467979  286447 system_pods.go:89] "storage-provisioner" [41f06c82-3bea-4b9c-b8a3-13b9c24f9091] Running
	I0316 16:57:05.468000  286447 system_pods.go:126] duration metric: took 208.431461ms to wait for k8s-apps to be running ...
	I0316 16:57:05.468015  286447 system_svc.go:44] waiting for kubelet service to be running ....
	I0316 16:57:05.468089  286447 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0316 16:57:05.482453  286447 system_svc.go:56] duration metric: took 14.427942ms WaitForService to wait for kubelet
	I0316 16:57:05.482484  286447 kubeadm.go:576] duration metric: took 16.521364822s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0316 16:57:05.482504  286447 node_conditions.go:102] verifying NodePressure condition ...
	I0316 16:57:05.529361  286447 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0316 16:57:05.539244  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0316 16:57:05.613957  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0316 16:57:05.659335  286447 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0316 16:57:05.659369  286447 node_conditions.go:123] node cpu capacity is 2
	I0316 16:57:05.659381  286447 node_conditions.go:105] duration metric: took 176.871987ms to run NodePressure ...
	I0316 16:57:05.659395  286447 start.go:240] waiting for startup goroutines ...
	I0316 16:57:05.754352  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0316 16:57:06.029348  286447 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0316 16:57:06.040495  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0316 16:57:06.114984  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0316 16:57:06.253733  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0316 16:57:06.528333  286447 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0316 16:57:06.539281  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0316 16:57:06.615780  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0316 16:57:06.765870  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0316 16:57:07.028704  286447 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0316 16:57:07.039581  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0316 16:57:07.114459  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0316 16:57:07.254515  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0316 16:57:07.528673  286447 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0316 16:57:07.538962  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0316 16:57:07.613319  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0316 16:57:07.766292  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0316 16:57:08.028972  286447 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0316 16:57:08.042022  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0316 16:57:08.114457  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0316 16:57:08.254162  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0316 16:57:08.528540  286447 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0316 16:57:08.541344  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0316 16:57:08.614287  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0316 16:57:08.756578  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0316 16:57:09.028556  286447 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0316 16:57:09.039751  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0316 16:57:09.114773  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0316 16:57:09.253794  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0316 16:57:09.528142  286447 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0316 16:57:09.539703  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0316 16:57:09.614557  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0316 16:57:09.754813  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0316 16:57:10.029590  286447 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0316 16:57:10.040610  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0316 16:57:10.114736  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0316 16:57:10.254731  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0316 16:57:10.528942  286447 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0316 16:57:10.540022  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0316 16:57:10.614243  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0316 16:57:10.755693  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0316 16:57:11.029224  286447 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0316 16:57:11.039403  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0316 16:57:11.116300  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0316 16:57:11.255585  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0316 16:57:11.528979  286447 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0316 16:57:11.538827  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0316 16:57:11.614831  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0316 16:57:11.755235  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0316 16:57:12.029211  286447 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0316 16:57:12.039410  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0316 16:57:12.114844  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0316 16:57:12.254637  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0316 16:57:12.528975  286447 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0316 16:57:12.539064  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0316 16:57:12.614015  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0316 16:57:12.767528  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0316 16:57:13.031774  286447 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0316 16:57:13.038271  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0316 16:57:13.115075  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0316 16:57:13.254787  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0316 16:57:13.529350  286447 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0316 16:57:13.539150  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0316 16:57:13.614298  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0316 16:57:13.755023  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0316 16:57:14.043213  286447 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0316 16:57:14.046828  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0316 16:57:14.116449  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0316 16:57:14.254190  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0316 16:57:14.529082  286447 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0316 16:57:14.538569  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0316 16:57:14.614448  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0316 16:57:14.753960  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0316 16:57:15.029259  286447 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0316 16:57:15.040650  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0316 16:57:15.118033  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0316 16:57:15.255491  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0316 16:57:15.528775  286447 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0316 16:57:15.543193  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0316 16:57:15.614275  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0316 16:57:15.756390  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0316 16:57:16.029319  286447 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0316 16:57:16.039387  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0316 16:57:16.114279  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0316 16:57:16.254249  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0316 16:57:16.528535  286447 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0316 16:57:16.539046  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0316 16:57:16.613730  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0316 16:57:16.754876  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0316 16:57:17.029021  286447 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0316 16:57:17.038884  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0316 16:57:17.113850  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0316 16:57:17.254283  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0316 16:57:17.529411  286447 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0316 16:57:17.539861  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0316 16:57:17.614595  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0316 16:57:17.754487  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0316 16:57:18.029468  286447 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0316 16:57:18.039408  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0316 16:57:18.114724  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0316 16:57:18.254435  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0316 16:57:18.528482  286447 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0316 16:57:18.539582  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0316 16:57:18.623884  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0316 16:57:18.754773  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0316 16:57:19.028703  286447 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0316 16:57:19.039569  286447 kapi.go:107] duration metric: took 21.505710164s to wait for kubernetes.io/minikube-addons=registry ...
	I0316 16:57:19.119042  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0316 16:57:19.254844  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0316 16:57:19.528607  286447 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0316 16:57:19.614583  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0316 16:57:19.760004  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0316 16:57:20.033663  286447 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0316 16:57:20.115282  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0316 16:57:20.255636  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0316 16:57:20.534061  286447 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0316 16:57:20.613747  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0316 16:57:20.754601  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0316 16:57:21.028335  286447 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0316 16:57:21.117489  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0316 16:57:21.253552  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0316 16:57:21.528126  286447 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0316 16:57:21.614471  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0316 16:57:21.762605  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0316 16:57:22.029110  286447 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0316 16:57:22.113670  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0316 16:57:22.254402  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0316 16:57:22.528430  286447 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0316 16:57:22.613993  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0316 16:57:22.754245  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0316 16:57:23.028613  286447 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0316 16:57:23.114470  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0316 16:57:23.254596  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0316 16:57:23.529675  286447 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0316 16:57:23.614134  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0316 16:57:23.770543  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0316 16:57:24.028836  286447 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0316 16:57:24.115051  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0316 16:57:24.254605  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0316 16:57:24.527921  286447 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0316 16:57:24.614537  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0316 16:57:24.757716  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0316 16:57:25.030855  286447 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0316 16:57:25.114728  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0316 16:57:25.255176  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0316 16:57:25.528759  286447 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0316 16:57:25.614432  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0316 16:57:25.755209  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0316 16:57:26.028306  286447 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0316 16:57:26.114181  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0316 16:57:26.255968  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0316 16:57:26.528752  286447 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0316 16:57:26.614239  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0316 16:57:26.753523  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0316 16:57:27.030021  286447 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0316 16:57:27.114629  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0316 16:57:27.256745  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0316 16:57:27.528879  286447 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0316 16:57:27.614629  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0316 16:57:27.754405  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0316 16:57:28.028435  286447 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0316 16:57:28.114099  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0316 16:57:28.256166  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0316 16:57:28.527966  286447 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0316 16:57:28.613796  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0316 16:57:28.754215  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0316 16:57:29.028646  286447 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0316 16:57:29.114062  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0316 16:57:29.255771  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0316 16:57:29.528260  286447 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0316 16:57:29.614280  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0316 16:57:29.753705  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0316 16:57:30.031998  286447 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0316 16:57:30.116137  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0316 16:57:30.254077  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0316 16:57:30.529040  286447 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0316 16:57:30.615720  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0316 16:57:30.754410  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0316 16:57:31.029149  286447 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0316 16:57:31.117318  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0316 16:57:31.260129  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0316 16:57:31.528663  286447 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0316 16:57:31.613923  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0316 16:57:31.754247  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0316 16:57:32.031418  286447 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0316 16:57:32.114966  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0316 16:57:32.254504  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0316 16:57:32.529497  286447 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0316 16:57:32.614462  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0316 16:57:32.754832  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0316 16:57:33.031403  286447 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0316 16:57:33.114385  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0316 16:57:33.254348  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0316 16:57:33.528314  286447 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0316 16:57:33.614412  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0316 16:57:33.754048  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0316 16:57:34.039147  286447 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0316 16:57:34.115545  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0316 16:57:34.253961  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0316 16:57:34.528197  286447 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0316 16:57:34.613810  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0316 16:57:34.765204  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0316 16:57:35.054588  286447 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0316 16:57:35.114985  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0316 16:57:35.254534  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0316 16:57:35.529122  286447 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0316 16:57:35.614473  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0316 16:57:35.754632  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0316 16:57:36.029901  286447 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0316 16:57:36.115214  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0316 16:57:36.253935  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0316 16:57:36.528822  286447 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0316 16:57:36.614482  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0316 16:57:36.755556  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0316 16:57:37.030566  286447 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0316 16:57:37.113848  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0316 16:57:37.254296  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0316 16:57:37.530237  286447 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0316 16:57:37.613870  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0316 16:57:37.754723  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0316 16:57:38.029557  286447 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0316 16:57:38.116328  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0316 16:57:38.254888  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0316 16:57:38.527929  286447 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0316 16:57:38.617476  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0316 16:57:38.755049  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0316 16:57:39.029934  286447 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0316 16:57:39.125802  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0316 16:57:39.254320  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0316 16:57:39.528105  286447 kapi.go:107] duration metric: took 42.004884906s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0316 16:57:39.613647  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0316 16:57:39.754753  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0316 16:57:40.114212  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0316 16:57:40.254488  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0316 16:57:40.615664  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0316 16:57:40.756549  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0316 16:57:41.114351  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0316 16:57:41.254145  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0316 16:57:41.613955  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0316 16:57:41.754733  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0316 16:57:42.116011  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0316 16:57:42.257537  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0316 16:57:42.614244  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0316 16:57:42.761219  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0316 16:57:43.114052  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0316 16:57:43.254171  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0316 16:57:43.613988  286447 kapi.go:107] duration metric: took 43.003931909s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0316 16:57:43.616971  286447 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-821353 cluster.
	I0316 16:57:43.619969  286447 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0316 16:57:43.621885  286447 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0316 16:57:43.754056  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0316 16:57:44.254385  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0316 16:57:44.753949  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0316 16:57:45.256509  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0316 16:57:45.754156  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0316 16:57:46.253947  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0316 16:57:46.754080  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0316 16:57:47.254451  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0316 16:57:47.754558  286447 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0316 16:57:48.257953  286447 kapi.go:107] duration metric: took 49.00988459s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0316 16:57:48.264681  286447 out.go:177] * Enabled addons: ingress-dns, nvidia-device-plugin, cloud-spanner, storage-provisioner, inspektor-gadget, yakd, metrics-server, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0316 16:57:48.266416  286447 addons.go:505] duration metric: took 59.305049278s for enable addons: enabled=[ingress-dns nvidia-device-plugin cloud-spanner storage-provisioner inspektor-gadget yakd metrics-server storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0316 16:57:48.266479  286447 start.go:245] waiting for cluster config update ...
	I0316 16:57:48.266500  286447 start.go:254] writing updated cluster config ...
	I0316 16:57:48.266808  286447 ssh_runner.go:195] Run: rm -f paused
	I0316 16:57:48.623045  286447 start.go:600] kubectl: 1.29.3, cluster: 1.28.4 (minor skew: 1)
	I0316 16:57:48.627041  286447 out.go:177] * Done! kubectl is now configured to use "addons-821353" cluster and "default" namespace by default
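	The kapi.go lines above are minikube repeatedly listing addon pods by label until each reports Running, alongside a one-off apiserver healthz probe. A rough manual equivalent, assuming the label selectors and cluster context shown in this log (the 120s timeout is illustrative), would be:
	
	$ kubectl --context addons-821353 wait --namespace ingress-nginx \
	    --for=condition=Ready pod --selector=app.kubernetes.io/name=ingress-nginx --timeout=120s
	$ kubectl --context addons-821353 wait --namespace kube-system \
	    --for=condition=Ready pod --selector=kubernetes.io/minikube-addons=csi-hostpath-driver --timeout=120s
	# apiserver health, mirroring the healthz check logged at 16:57:04 (prints "ok")
	$ kubectl --context addons-821353 get --raw /healthz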
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                       ATTEMPT             POD ID              POD
	cbf484ef63ff4       dd1b12fcb6097       8 seconds ago        Exited              hello-world-app            2                   5c8ff5d13da2a       hello-world-app-5d77478584-fm2bn
	3c581e40bb3b5       be5e6f23a9904       32 seconds ago       Running             nginx                      0                   078a81308510c       nginx
	820000ada007f       6ef582f3ec844       About a minute ago   Running             gcp-auth                   0                   859eacaa0d957       gcp-auth-7d69788767-q27cs
	973b3e4e460f0       6505abd14fdf8       About a minute ago   Exited              controller                 0                   ee33e8b76c582       ingress-nginx-controller-76dc478dd8-nj72d
	d6031f66704cb       1a024e390dd05       About a minute ago   Exited              patch                      0                   4ccc9afb83ab9       ingress-nginx-admission-patch-lbhw4
	609d0fb3f51d8       1a024e390dd05       About a minute ago   Exited              create                     0                   1828e03827932       ingress-nginx-admission-create-zj4hv
	38141e9e73ef1       20e3f2db01e81       About a minute ago   Running             yakd                       0                   a0e6d5154a4fd       yakd-dashboard-9947fc6bf-qmd85
	76dc97c0122cf       41340d5d57adb       About a minute ago   Running             cloud-spanner-emulator     0                   2c5e704cf5e36       cloud-spanner-emulator-6548d5df46-rw4w5
	e97bb816a8900       7ce2150c8929b       About a minute ago   Running             local-path-provisioner     0                   204fdb81b9886       local-path-provisioner-78b46b4d5c-jlhwx
	57214dd807796       97e04611ad434       About a minute ago   Running             coredns                    0                   b0b092203caaf       coredns-5dd5756b68-f2clp
	8aa7392492714       c0cfb4ce73bda       About a minute ago   Running             nvidia-device-plugin-ctr   0                   a04b32f31e98e       nvidia-device-plugin-daemonset-qkppj
	4d97eb2ec6b0b       ba04bb24b9575       2 minutes ago        Running             storage-provisioner        0                   6edfd6ce1692d       storage-provisioner
	af43cbc5f6d18       4740c1948d3fc       2 minutes ago        Running             kindnet-cni                0                   fa5df2cf5d61d       kindnet-dk6xn
	9d4adf3859b65       3ca3ca488cf13       2 minutes ago        Running             kube-proxy                 0                   b7d96b05fa1d4       kube-proxy-t7nx5
	69b92716761f5       9961cbceaf234       2 minutes ago        Running             kube-controller-manager    0                   85a4ef15d276c       kube-controller-manager-addons-821353
	740a77897f018       04b4c447bb9d4       2 minutes ago        Running             kube-apiserver             0                   e8e94170b4e82       kube-apiserver-addons-821353
	a98441f33d393       05c284c929889       2 minutes ago        Running             kube-scheduler             0                   68dee01f19170       kube-scheduler-addons-821353
	2214a4311ebaa       9cdd6470f48c8       2 minutes ago        Running             etcd                       0                   2fe29b6085c91       etcd-addons-821353
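	The container status table is CRI-level output from the node. A similar listing can be pulled directly over the node's containerd CRI socket (the socket path matches the kubeadm annotation shown under "describe nodes" below); this is a sketch, not part of the test run:
	
	$ minikube -p addons-821353 ssh -- sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a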
	
	
	==> containerd <==
	Mar 16 16:58:53 addons-821353 containerd[759]: time="2024-03-16T16:58:53.926986181Z" level=info msg="StopContainer for \"2bfb2d2b14e653982d947928d86e156b14931f36b1035a851f3a40bffeaad04b\" returns successfully"
	Mar 16 16:58:53 addons-821353 containerd[759]: time="2024-03-16T16:58:53.927729742Z" level=info msg="StopPodSandbox for \"ec43d524f625fbbb1b2692a8eb31e89bf7640740e1a715904078e6a6db525bd5\""
	Mar 16 16:58:53 addons-821353 containerd[759]: time="2024-03-16T16:58:53.927883636Z" level=info msg="Container to stop \"2bfb2d2b14e653982d947928d86e156b14931f36b1035a851f3a40bffeaad04b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Mar 16 16:58:53 addons-821353 containerd[759]: time="2024-03-16T16:58:53.958621252Z" level=warning msg="cleanup warnings time=\"2024-03-16T16:58:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=9198 runtime=io.containerd.runc.v2\n"
	Mar 16 16:58:53 addons-821353 containerd[759]: time="2024-03-16T16:58:53.962027607Z" level=info msg="StopContainer for \"332ea6a03076e66fec1a9990ef5d25971945e21d3f9298788e98b5f6cccff826\" returns successfully"
	Mar 16 16:58:53 addons-821353 containerd[759]: time="2024-03-16T16:58:53.962841354Z" level=info msg="StopPodSandbox for \"27b5210dfd1d5708d61989f4ca41e314977a8d1905e3fe7f9b47e1b715efeed5\""
	Mar 16 16:58:53 addons-821353 containerd[759]: time="2024-03-16T16:58:53.962907675Z" level=info msg="Container to stop \"332ea6a03076e66fec1a9990ef5d25971945e21d3f9298788e98b5f6cccff826\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Mar 16 16:58:53 addons-821353 containerd[759]: time="2024-03-16T16:58:53.966222437Z" level=info msg="shim disconnected" id=ec43d524f625fbbb1b2692a8eb31e89bf7640740e1a715904078e6a6db525bd5
	Mar 16 16:58:53 addons-821353 containerd[759]: time="2024-03-16T16:58:53.966290531Z" level=warning msg="cleaning up after shim disconnected" id=ec43d524f625fbbb1b2692a8eb31e89bf7640740e1a715904078e6a6db525bd5 namespace=k8s.io
	Mar 16 16:58:53 addons-821353 containerd[759]: time="2024-03-16T16:58:53.966303315Z" level=info msg="cleaning up dead shim"
	Mar 16 16:58:53 addons-821353 containerd[759]: time="2024-03-16T16:58:53.982213851Z" level=warning msg="cleanup warnings time=\"2024-03-16T16:58:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=9235 runtime=io.containerd.runc.v2\n"
	Mar 16 16:58:53 addons-821353 containerd[759]: time="2024-03-16T16:58:53.996725592Z" level=info msg="shim disconnected" id=27b5210dfd1d5708d61989f4ca41e314977a8d1905e3fe7f9b47e1b715efeed5
	Mar 16 16:58:53 addons-821353 containerd[759]: time="2024-03-16T16:58:53.997016824Z" level=warning msg="cleaning up after shim disconnected" id=27b5210dfd1d5708d61989f4ca41e314977a8d1905e3fe7f9b47e1b715efeed5 namespace=k8s.io
	Mar 16 16:58:53 addons-821353 containerd[759]: time="2024-03-16T16:58:53.997032331Z" level=info msg="cleaning up dead shim"
	Mar 16 16:58:54 addons-821353 containerd[759]: time="2024-03-16T16:58:54.010011914Z" level=info msg="TearDown network for sandbox \"ec43d524f625fbbb1b2692a8eb31e89bf7640740e1a715904078e6a6db525bd5\" successfully"
	Mar 16 16:58:54 addons-821353 containerd[759]: time="2024-03-16T16:58:54.010089608Z" level=info msg="StopPodSandbox for \"ec43d524f625fbbb1b2692a8eb31e89bf7640740e1a715904078e6a6db525bd5\" returns successfully"
	Mar 16 16:58:54 addons-821353 containerd[759]: time="2024-03-16T16:58:54.014046089Z" level=warning msg="cleanup warnings time=\"2024-03-16T16:58:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=9288 runtime=io.containerd.runc.v2\n"
	Mar 16 16:58:54 addons-821353 containerd[759]: time="2024-03-16T16:58:54.060582520Z" level=info msg="TearDown network for sandbox \"27b5210dfd1d5708d61989f4ca41e314977a8d1905e3fe7f9b47e1b715efeed5\" successfully"
	Mar 16 16:58:54 addons-821353 containerd[759]: time="2024-03-16T16:58:54.060814272Z" level=info msg="StopPodSandbox for \"27b5210dfd1d5708d61989f4ca41e314977a8d1905e3fe7f9b47e1b715efeed5\" returns successfully"
	Mar 16 16:58:54 addons-821353 containerd[759]: time="2024-03-16T16:58:54.571799667Z" level=info msg="RemoveContainer for \"2bfb2d2b14e653982d947928d86e156b14931f36b1035a851f3a40bffeaad04b\""
	Mar 16 16:58:54 addons-821353 containerd[759]: time="2024-03-16T16:58:54.580812832Z" level=info msg="RemoveContainer for \"2bfb2d2b14e653982d947928d86e156b14931f36b1035a851f3a40bffeaad04b\" returns successfully"
	Mar 16 16:58:54 addons-821353 containerd[759]: time="2024-03-16T16:58:54.584378620Z" level=error msg="ContainerStatus for \"2bfb2d2b14e653982d947928d86e156b14931f36b1035a851f3a40bffeaad04b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2bfb2d2b14e653982d947928d86e156b14931f36b1035a851f3a40bffeaad04b\": not found"
	Mar 16 16:58:54 addons-821353 containerd[759]: time="2024-03-16T16:58:54.586048797Z" level=info msg="RemoveContainer for \"332ea6a03076e66fec1a9990ef5d25971945e21d3f9298788e98b5f6cccff826\""
	Mar 16 16:58:54 addons-821353 containerd[759]: time="2024-03-16T16:58:54.595712264Z" level=info msg="RemoveContainer for \"332ea6a03076e66fec1a9990ef5d25971945e21d3f9298788e98b5f6cccff826\" returns successfully"
	Mar 16 16:58:54 addons-821353 containerd[759]: time="2024-03-16T16:58:54.599781688Z" level=error msg="ContainerStatus for \"332ea6a03076e66fec1a9990ef5d25971945e21d3f9298788e98b5f6cccff826\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"332ea6a03076e66fec1a9990ef5d25971945e21d3f9298788e98b5f6cccff826\": not found"
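	The containerd entries above come from the node's system journal; a sketch for retrieving the most recent of them from the same node (the tail length is arbitrary):
	
	$ minikube -p addons-821353 ssh -- sudo journalctl -u containerd --no-pager | tail -n 50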
	
	
	==> coredns [57214dd807796f8852bb66ae12d24d9348ebb74a0c1a4cdb06877e877bc6051a] <==
	[INFO] 10.244.0.19:44091 - 12220 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000306459s
	[INFO] 10.244.0.19:44091 - 33723 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000063606s
	[INFO] 10.244.0.19:44091 - 29342 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001330388s
	[INFO] 10.244.0.19:50297 - 20611 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.003041311s
	[INFO] 10.244.0.19:50297 - 43809 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000312236s
	[INFO] 10.244.0.19:44091 - 50195 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001053844s
	[INFO] 10.244.0.19:44091 - 14149 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000098961s
	[INFO] 10.244.0.19:36835 - 49820 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000140463s
	[INFO] 10.244.0.19:43613 - 6337 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000355764s
	[INFO] 10.244.0.19:36835 - 18794 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000051914s
	[INFO] 10.244.0.19:43613 - 10895 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000101817s
	[INFO] 10.244.0.19:36835 - 51035 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000107183s
	[INFO] 10.244.0.19:36835 - 48936 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000074453s
	[INFO] 10.244.0.19:43613 - 63786 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000085767s
	[INFO] 10.244.0.19:36835 - 61370 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000081107s
	[INFO] 10.244.0.19:36835 - 63967 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000077751s
	[INFO] 10.244.0.19:43613 - 43955 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000216992s
	[INFO] 10.244.0.19:43613 - 23045 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000085941s
	[INFO] 10.244.0.19:43613 - 23402 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000082593s
	[INFO] 10.244.0.19:36835 - 17036 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002311076s
	[INFO] 10.244.0.19:43613 - 63156 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001654234s
	[INFO] 10.244.0.19:36835 - 37834 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001266224s
	[INFO] 10.244.0.19:36835 - 20049 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000079803s
	[INFO] 10.244.0.19:43613 - 9055 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001317564s
	[INFO] 10.244.0.19:43613 - 58941 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000085694s
	
	
	==> describe nodes <==
	Name:               addons-821353
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-821353
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=dcb7bcec19ba52ac09364e1139fb2071215a1bc6
	                    minikube.k8s.io/name=addons-821353
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_16T16_56_36_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-821353
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 16 Mar 2024 16:56:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-821353
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 16 Mar 2024 16:58:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 16 Mar 2024 16:58:37 +0000   Sat, 16 Mar 2024 16:56:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 16 Mar 2024 16:58:37 +0000   Sat, 16 Mar 2024 16:56:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 16 Mar 2024 16:58:37 +0000   Sat, 16 Mar 2024 16:56:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 16 Mar 2024 16:58:37 +0000   Sat, 16 Mar 2024 16:56:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-821353
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	System Info:
	  Machine ID:                 84a283e61ab74080b0ede3bf1eb2071e
	  System UUID:                f3fd6daf-9bb0-435e-aba5-9530287a5912
	  Boot ID:                    183b8861-7db8-4da8-9969-d0fd94fbc14e
	  Kernel Version:             5.15.0-1055-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.6.28
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-6548d5df46-rw4w5    0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         2m5s
	  default                     hello-world-app-5d77478584-fm2bn           0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         26s
	  default                     nginx                                      0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         36s
	  gcp-auth                    gcp-auth-7d69788767-q27cs                  0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         118s
	  kube-system                 coredns-5dd5756b68-f2clp                   100m (5%!)(MISSING)     0 (0%!)(MISSING)      70Mi (0%!)(MISSING)        170Mi (2%!)(MISSING)     2m9s
	  kube-system                 etcd-addons-821353                         100m (5%!)(MISSING)     0 (0%!)(MISSING)      100Mi (1%!)(MISSING)       0 (0%!)(MISSING)         2m23s
	  kube-system                 kindnet-dk6xn                              100m (5%!)(MISSING)     100m (5%!)(MISSING)   50Mi (0%!)(MISSING)        50Mi (0%!)(MISSING)      2m10s
	  kube-system                 kube-apiserver-addons-821353               250m (12%!)(MISSING)    0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         2m23s
	  kube-system                 kube-controller-manager-addons-821353      200m (10%!)(MISSING)    0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         2m23s
	  kube-system                 kube-proxy-t7nx5                           0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         2m10s
	  kube-system                 kube-scheduler-addons-821353               100m (5%!)(MISSING)     0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         2m23s
	  kube-system                 nvidia-device-plugin-daemonset-qkppj       0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         2m6s
	  kube-system                 storage-provisioner                        0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         2m3s
	  local-path-storage          local-path-provisioner-78b46b4d5c-jlhwx    0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         2m4s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-qmd85             0 (0%!)(MISSING)        0 (0%!)(MISSING)      128Mi (1%!)(MISSING)       256Mi (3%!)(MISSING)     2m2s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%!)(MISSING)  100m (5%!)(MISSING)
	  memory             348Mi (4%!)(MISSING)  476Mi (6%!)(MISSING)
	  ephemeral-storage  0 (0%!)(MISSING)      0 (0%!)(MISSING)
	  hugepages-1Gi      0 (0%!)(MISSING)      0 (0%!)(MISSING)
	  hugepages-2Mi      0 (0%!)(MISSING)      0 (0%!)(MISSING)
	  hugepages-32Mi     0 (0%!)(MISSING)      0 (0%!)(MISSING)
	  hugepages-64Ki     0 (0%!)(MISSING)      0 (0%!)(MISSING)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 2m8s   kube-proxy       
	  Normal  Starting                 2m23s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m23s  kubelet          Node addons-821353 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m23s  kubelet          Node addons-821353 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m23s  kubelet          Node addons-821353 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             2m23s  kubelet          Node addons-821353 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  2m23s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                2m23s  kubelet          Node addons-821353 status is now: NodeReady
	  Normal  RegisteredNode           2m10s  node-controller  Node addons-821353 event: Registered Node addons-821353 in Controller
	
	
	==> dmesg <==
	[  +0.004281] FS-Cache: Duplicate cookie detected
	[  +0.000717] FS-Cache: O-cookie c=00000006 [p=00000003 fl=226 nc=0 na=1]
	[  +0.001075] FS-Cache: O-cookie d=00000000bd90532f{9p.inode} n=000000009f5a83bd
	[  +0.001196] FS-Cache: O-key=[8] '90385c0100000000'
	[  +0.000707] FS-Cache: N-cookie c=0000000d [p=00000003 fl=2 nc=0 na=1]
	[  +0.000969] FS-Cache: N-cookie d=00000000bd90532f{9p.inode} n=000000002fb2cb4a
	[  +0.001203] FS-Cache: N-key=[8] '90385c0100000000'
	[  +2.796045] FS-Cache: Duplicate cookie detected
	[  +0.000789] FS-Cache: O-cookie c=00000004 [p=00000003 fl=226 nc=0 na=1]
	[  +0.001032] FS-Cache: O-cookie d=00000000bd90532f{9p.inode} n=0000000021646803
	[  +0.001142] FS-Cache: O-key=[8] '8f385c0100000000'
	[  +0.000807] FS-Cache: N-cookie c=0000000f [p=00000003 fl=2 nc=0 na=1]
	[  +0.001014] FS-Cache: N-cookie d=00000000bd90532f{9p.inode} n=00000000c172e29f
	[  +0.001175] FS-Cache: N-key=[8] '8f385c0100000000'
	[  +0.351941] FS-Cache: Duplicate cookie detected
	[  +0.000788] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
	[  +0.001022] FS-Cache: O-cookie d=00000000bd90532f{9p.inode} n=000000005121496a
	[  +0.001127] FS-Cache: O-key=[8] '9a385c0100000000'
	[  +0.000747] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.000980] FS-Cache: N-cookie d=00000000bd90532f{9p.inode} n=00000000a2827aa8
	[  +0.001107] FS-Cache: N-key=[8] '9a385c0100000000'
	[Mar16 15:48] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Mar16 15:53] overlayfs: failed to resolve '/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/45/fs': -2
	[  +1.903676] overlayfs: failed to resolve '/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/27/fs': -2
	[Mar16 15:56] hrtimer: interrupt took 4205539 ns
	
	
	==> etcd [2214a4311ebaa71431c4b1536585df11497d0da8485cb4feaca8445c4765c3a0] <==
	{"level":"info","ts":"2024-03-16T16:56:28.193952Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2024-03-16T16:56:28.194028Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2024-03-16T16:56:28.195484Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-03-16T16:56:28.195639Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-03-16T16:56:28.195652Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-03-16T16:56:28.196258Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-03-16T16:56:28.196285Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-16T16:56:28.579633Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-03-16T16:56:28.579685Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-03-16T16:56:28.579702Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-03-16T16:56:28.579725Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-03-16T16:56:28.579738Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-03-16T16:56:28.579749Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-03-16T16:56:28.579764Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-03-16T16:56:28.583704Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-16T16:56:28.58775Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-16T16:56:28.587827Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-16T16:56:28.587849Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-16T16:56:28.587869Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-821353 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-16T16:56:28.587886Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-16T16:56:28.588849Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-16T16:56:28.589037Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-16T16:56:28.589876Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-03-16T16:56:28.603669Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-16T16:56:28.603709Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> gcp-auth [820000ada007f9004c14e375b68b4456d30163e32cae5c6822711b8485a4624e] <==
	2024/03/16 16:57:42 GCP Auth Webhook started!
	2024/03/16 16:57:59 Ready to marshal response ...
	2024/03/16 16:57:59 Ready to write response ...
	2024/03/16 16:58:20 Ready to marshal response ...
	2024/03/16 16:58:20 Ready to write response ...
	2024/03/16 16:58:22 Ready to marshal response ...
	2024/03/16 16:58:22 Ready to write response ...
	2024/03/16 16:58:32 Ready to marshal response ...
	2024/03/16 16:58:32 Ready to write response ...
	2024/03/16 16:58:37 Ready to marshal response ...
	2024/03/16 16:58:37 Ready to write response ...
	
	
	==> kernel <==
	 16:58:58 up  2:41,  0 users,  load average: 1.84, 1.28, 0.72
	Linux addons-821353 5.15.0-1055-aws #60~20.04.1-Ubuntu SMP Thu Feb 22 15:54:21 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [af43cbc5f6d1800e5d6d2f8c1361de5d55cd536e7bb53c01a18aa5a2d314fc50] <==
	I0316 16:56:52.025421       1 main.go:227] handling current node
	I0316 16:57:02.041548       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0316 16:57:02.041577       1 main.go:227] handling current node
	I0316 16:57:12.053367       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0316 16:57:12.053395       1 main.go:227] handling current node
	I0316 16:57:22.057406       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0316 16:57:22.057431       1 main.go:227] handling current node
	I0316 16:57:32.064933       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0316 16:57:32.064965       1 main.go:227] handling current node
	I0316 16:57:42.077037       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0316 16:57:42.077069       1 main.go:227] handling current node
	I0316 16:57:52.088239       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0316 16:57:52.088267       1 main.go:227] handling current node
	I0316 16:58:02.100489       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0316 16:58:02.100519       1 main.go:227] handling current node
	I0316 16:58:12.104898       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0316 16:58:12.104929       1 main.go:227] handling current node
	I0316 16:58:22.117661       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0316 16:58:22.117696       1 main.go:227] handling current node
	I0316 16:58:32.130649       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0316 16:58:32.130939       1 main.go:227] handling current node
	I0316 16:58:42.144939       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0316 16:58:42.144970       1 main.go:227] handling current node
	I0316 16:58:52.157266       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0316 16:58:52.157525       1 main.go:227] handling current node
	
	
	==> kube-apiserver [740a77897f018067baed50bdd4a7bf467ceba1b8cef4b538be1a4ab610b5bf26] <==
	I0316 16:58:23.074934       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.107.129.118"}
	E0316 16:58:31.979696       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["exempt","system","node-high","leader-election","workload-high","workload-low","global-default","catch-all"] items=[{},{},{},{},{},{},{},{}]
	I0316 16:58:32.337281       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0316 16:58:32.890557       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.106.3.163"}
	E0316 16:58:41.980939       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["exempt","system","node-high","leader-election","workload-high","workload-low","global-default","catch-all"] items=[{},{},{},{},{},{},{},{}]
	E0316 16:58:51.981940       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["workload-low","global-default","catch-all","exempt","system","node-high","leader-election","workload-high"] items=[{},{},{},{},{},{},{},{}]
	I0316 16:58:53.575562       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0316 16:58:53.575949       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0316 16:58:53.663920       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0316 16:58:53.664478       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0316 16:58:53.674644       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0316 16:58:53.674772       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0316 16:58:53.690783       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0316 16:58:53.691097       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0316 16:58:53.713345       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0316 16:58:53.713403       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0316 16:58:53.715345       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0316 16:58:53.715406       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0316 16:58:53.738168       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0316 16:58:53.738289       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0316 16:58:53.750228       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0316 16:58:53.750373       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0316 16:58:54.690956       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0316 16:58:54.750744       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0316 16:58:54.774683       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	
	
	==> kube-controller-manager [69b92716761f5602c84e1a50edcabfe627d1c44cb74256dfed7d581bfe4bf241] <==
	E0316 16:58:37.733870       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0316 16:58:46.963506       1 stateful_set.go:458] "StatefulSet has been deleted" key="kube-system/csi-hostpath-attacher"
	I0316 16:58:47.055957       1 stateful_set.go:458] "StatefulSet has been deleted" key="kube-system/csi-hostpath-resizer"
	I0316 16:58:50.338769       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I0316 16:58:50.348824       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-76dc478dd8" duration="4.652µs"
	I0316 16:58:50.349944       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I0316 16:58:50.564781       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="80.565µs"
	I0316 16:58:53.810722       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/snapshot-controller-58dbcc7b99" duration="7.221µs"
	E0316 16:58:54.692777       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
	E0316 16:58:54.752600       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
	E0316 16:58:54.776530       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
	W0316 16:58:55.865187       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0316 16:58:55.865221       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0316 16:58:55.896117       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0316 16:58:55.896157       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0316 16:58:56.029273       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0316 16:58:56.029310       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0316 16:58:57.925422       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0316 16:58:57.925476       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0316 16:58:58.241196       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0316 16:58:58.241233       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0316 16:58:58.484135       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0316 16:58:58.484170       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0316 16:58:58.585415       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0316 16:58:58.585448       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	
	==> kube-proxy [9d4adf3859b65af5412b4d4adb0ad4e9d6d95318fd8dfd1a46170783a4db1023] <==
	I0316 16:56:49.959267       1 server_others.go:69] "Using iptables proxy"
	I0316 16:56:49.982698       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I0316 16:56:50.104347       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0316 16:56:50.126221       1 server_others.go:152] "Using iptables Proxier"
	I0316 16:56:50.126259       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0316 16:56:50.126266       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0316 16:56:50.126297       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0316 16:56:50.126518       1 server.go:846] "Version info" version="v1.28.4"
	I0316 16:56:50.126529       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0316 16:56:50.130646       1 config.go:188] "Starting service config controller"
	I0316 16:56:50.136329       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0316 16:56:50.136398       1 config.go:97] "Starting endpoint slice config controller"
	I0316 16:56:50.136406       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0316 16:56:50.137477       1 config.go:315] "Starting node config controller"
	I0316 16:56:50.137487       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0316 16:56:50.237460       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0316 16:56:50.237502       1 shared_informer.go:318] Caches are synced for node config
	I0316 16:56:50.237514       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [a98441f33d393f8126f0865b08c6b7c1a118744f0565dedaa1e9b2a31671db16] <==
	W0316 16:56:32.912592       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0316 16:56:32.912608       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0316 16:56:32.912644       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0316 16:56:32.912659       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0316 16:56:32.912791       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0316 16:56:32.912816       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0316 16:56:32.912879       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0316 16:56:32.912897       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0316 16:56:32.912936       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0316 16:56:32.912955       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0316 16:56:32.913001       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0316 16:56:32.913018       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0316 16:56:32.913055       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0316 16:56:32.913068       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0316 16:56:32.913128       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0316 16:56:32.913142       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0316 16:56:32.913207       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0316 16:56:32.913223       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0316 16:56:32.913272       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0316 16:56:32.913287       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0316 16:56:32.913313       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0316 16:56:32.913328       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0316 16:56:32.914613       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0316 16:56:32.914645       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0316 16:56:34.504476       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 16 16:58:51 addons-821353 kubelet[1501]: I0316 16:58:51.141256    1501 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="954f0159-14a4-4c47-a9a2-3199d34dbb3e" path="/var/lib/kubelet/pods/954f0159-14a4-4c47-a9a2-3199d34dbb3e/volumes"
	Mar 16 16:58:53 addons-821353 kubelet[1501]: I0316 16:58:53.560627    1501 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee33e8b76c582079f8ba9966add7ad06bc134c1259cace519aa88297be1622c2"
	Mar 16 16:58:53 addons-821353 kubelet[1501]: I0316 16:58:53.803485    1501 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p8hsg\" (UniqueName: \"kubernetes.io/projected/cec18c7f-419f-4d35-8144-82ca6a7d846b-kube-api-access-p8hsg\") pod \"cec18c7f-419f-4d35-8144-82ca6a7d846b\" (UID: \"cec18c7f-419f-4d35-8144-82ca6a7d846b\") "
	Mar 16 16:58:53 addons-821353 kubelet[1501]: I0316 16:58:53.803539    1501 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cec18c7f-419f-4d35-8144-82ca6a7d846b-webhook-cert\") pod \"cec18c7f-419f-4d35-8144-82ca6a7d846b\" (UID: \"cec18c7f-419f-4d35-8144-82ca6a7d846b\") "
	Mar 16 16:58:53 addons-821353 kubelet[1501]: I0316 16:58:53.808158    1501 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cec18c7f-419f-4d35-8144-82ca6a7d846b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "cec18c7f-419f-4d35-8144-82ca6a7d846b" (UID: "cec18c7f-419f-4d35-8144-82ca6a7d846b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Mar 16 16:58:53 addons-821353 kubelet[1501]: I0316 16:58:53.808530    1501 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cec18c7f-419f-4d35-8144-82ca6a7d846b-kube-api-access-p8hsg" (OuterVolumeSpecName: "kube-api-access-p8hsg") pod "cec18c7f-419f-4d35-8144-82ca6a7d846b" (UID: "cec18c7f-419f-4d35-8144-82ca6a7d846b"). InnerVolumeSpecName "kube-api-access-p8hsg". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Mar 16 16:58:53 addons-821353 kubelet[1501]: I0316 16:58:53.903872    1501 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cec18c7f-419f-4d35-8144-82ca6a7d846b-webhook-cert\") on node \"addons-821353\" DevicePath \"\""
	Mar 16 16:58:53 addons-821353 kubelet[1501]: I0316 16:58:53.903908    1501 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-p8hsg\" (UniqueName: \"kubernetes.io/projected/cec18c7f-419f-4d35-8144-82ca6a7d846b-kube-api-access-p8hsg\") on node \"addons-821353\" DevicePath \"\""
	Mar 16 16:58:54 addons-821353 kubelet[1501]: I0316 16:58:54.107871    1501 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jpjs6\" (UniqueName: \"kubernetes.io/projected/ae19a8d7-5e89-4efb-9f40-f2a60d1f9b82-kube-api-access-jpjs6\") pod \"ae19a8d7-5e89-4efb-9f40-f2a60d1f9b82\" (UID: \"ae19a8d7-5e89-4efb-9f40-f2a60d1f9b82\") "
	Mar 16 16:58:54 addons-821353 kubelet[1501]: I0316 16:58:54.110453    1501 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae19a8d7-5e89-4efb-9f40-f2a60d1f9b82-kube-api-access-jpjs6" (OuterVolumeSpecName: "kube-api-access-jpjs6") pod "ae19a8d7-5e89-4efb-9f40-f2a60d1f9b82" (UID: "ae19a8d7-5e89-4efb-9f40-f2a60d1f9b82"). InnerVolumeSpecName "kube-api-access-jpjs6". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Mar 16 16:58:54 addons-821353 kubelet[1501]: I0316 16:58:54.209380    1501 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jj8cn\" (UniqueName: \"kubernetes.io/projected/f54bcac9-de9a-4ac5-8da6-174e461156d8-kube-api-access-jj8cn\") pod \"f54bcac9-de9a-4ac5-8da6-174e461156d8\" (UID: \"f54bcac9-de9a-4ac5-8da6-174e461156d8\") "
	Mar 16 16:58:54 addons-821353 kubelet[1501]: I0316 16:58:54.209469    1501 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-jpjs6\" (UniqueName: \"kubernetes.io/projected/ae19a8d7-5e89-4efb-9f40-f2a60d1f9b82-kube-api-access-jpjs6\") on node \"addons-821353\" DevicePath \"\""
	Mar 16 16:58:54 addons-821353 kubelet[1501]: I0316 16:58:54.211280    1501 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f54bcac9-de9a-4ac5-8da6-174e461156d8-kube-api-access-jj8cn" (OuterVolumeSpecName: "kube-api-access-jj8cn") pod "f54bcac9-de9a-4ac5-8da6-174e461156d8" (UID: "f54bcac9-de9a-4ac5-8da6-174e461156d8"). InnerVolumeSpecName "kube-api-access-jj8cn". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Mar 16 16:58:54 addons-821353 kubelet[1501]: I0316 16:58:54.310053    1501 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-jj8cn\" (UniqueName: \"kubernetes.io/projected/f54bcac9-de9a-4ac5-8da6-174e461156d8-kube-api-access-jj8cn\") on node \"addons-821353\" DevicePath \"\""
	Mar 16 16:58:54 addons-821353 kubelet[1501]: I0316 16:58:54.564762    1501 scope.go:117] "RemoveContainer" containerID="2bfb2d2b14e653982d947928d86e156b14931f36b1035a851f3a40bffeaad04b"
	Mar 16 16:58:54 addons-821353 kubelet[1501]: I0316 16:58:54.581998    1501 scope.go:117] "RemoveContainer" containerID="2bfb2d2b14e653982d947928d86e156b14931f36b1035a851f3a40bffeaad04b"
	Mar 16 16:58:54 addons-821353 kubelet[1501]: E0316 16:58:54.584684    1501 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2bfb2d2b14e653982d947928d86e156b14931f36b1035a851f3a40bffeaad04b\": not found" containerID="2bfb2d2b14e653982d947928d86e156b14931f36b1035a851f3a40bffeaad04b"
	Mar 16 16:58:54 addons-821353 kubelet[1501]: I0316 16:58:54.584797    1501 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2bfb2d2b14e653982d947928d86e156b14931f36b1035a851f3a40bffeaad04b"} err="failed to get container status \"2bfb2d2b14e653982d947928d86e156b14931f36b1035a851f3a40bffeaad04b\": rpc error: code = NotFound desc = an error occurred when try to find container \"2bfb2d2b14e653982d947928d86e156b14931f36b1035a851f3a40bffeaad04b\": not found"
	Mar 16 16:58:54 addons-821353 kubelet[1501]: I0316 16:58:54.584871    1501 scope.go:117] "RemoveContainer" containerID="332ea6a03076e66fec1a9990ef5d25971945e21d3f9298788e98b5f6cccff826"
	Mar 16 16:58:54 addons-821353 kubelet[1501]: I0316 16:58:54.598523    1501 scope.go:117] "RemoveContainer" containerID="332ea6a03076e66fec1a9990ef5d25971945e21d3f9298788e98b5f6cccff826"
	Mar 16 16:58:54 addons-821353 kubelet[1501]: E0316 16:58:54.600007    1501 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"332ea6a03076e66fec1a9990ef5d25971945e21d3f9298788e98b5f6cccff826\": not found" containerID="332ea6a03076e66fec1a9990ef5d25971945e21d3f9298788e98b5f6cccff826"
	Mar 16 16:58:54 addons-821353 kubelet[1501]: I0316 16:58:54.600048    1501 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"332ea6a03076e66fec1a9990ef5d25971945e21d3f9298788e98b5f6cccff826"} err="failed to get container status \"332ea6a03076e66fec1a9990ef5d25971945e21d3f9298788e98b5f6cccff826\": rpc error: code = NotFound desc = an error occurred when try to find container \"332ea6a03076e66fec1a9990ef5d25971945e21d3f9298788e98b5f6cccff826\": not found"
	Mar 16 16:58:55 addons-821353 kubelet[1501]: I0316 16:58:55.140922    1501 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="ae19a8d7-5e89-4efb-9f40-f2a60d1f9b82" path="/var/lib/kubelet/pods/ae19a8d7-5e89-4efb-9f40-f2a60d1f9b82/volumes"
	Mar 16 16:58:55 addons-821353 kubelet[1501]: I0316 16:58:55.141620    1501 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="cec18c7f-419f-4d35-8144-82ca6a7d846b" path="/var/lib/kubelet/pods/cec18c7f-419f-4d35-8144-82ca6a7d846b/volumes"
	Mar 16 16:58:55 addons-821353 kubelet[1501]: I0316 16:58:55.142129    1501 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="f54bcac9-de9a-4ac5-8da6-174e461156d8" path="/var/lib/kubelet/pods/f54bcac9-de9a-4ac5-8da6-174e461156d8/volumes"
	
	
	==> storage-provisioner [4d97eb2ec6b0b0a85fa60c9cf0c4c53084d753ce07139761c56ffa799fd0c303] <==
	I0316 16:56:56.989265       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0316 16:56:57.009849       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0316 16:56:57.009935       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0316 16:56:57.021680       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0316 16:56:57.023384       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-821353_e31a00f5-05e6-4977-b9cb-7ed7c7ef4ddf!
	I0316 16:56:57.029304       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c7d14e3f-b3fc-4211-af39-9eb35c81cbef", APIVersion:"v1", ResourceVersion:"643", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-821353_e31a00f5-05e6-4977-b9cb-7ed7c7ef4ddf became leader
	I0316 16:56:57.123823       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-821353_e31a00f5-05e6-4977-b9cb-7ed7c7ef4ddf!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-821353 -n addons-821353
helpers_test.go:261: (dbg) Run:  kubectl --context addons-821353 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (37.56s)
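Note: the post-mortem above is collected by shelling out to minikube and kubectl (the two "(dbg) Run:" commands at helpers_test.go:254 and :261). A minimal sketch of that collection step is below, assuming the same profile name and binaries as this run; the helper function here is illustrative and is not the actual helpers_test.go code.

package main

import (
	"fmt"
	"os/exec"
)

// collect runs one post-mortem command and prints its combined output,
// mirroring the "(dbg) Run:" lines in the report above.
func collect(name string, args ...string) {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Printf(">>> %s %v (err=%v)\n%s\n", name, args, err, out)
}

func main() {
	profile := "addons-821353" // profile name taken from the failing run above

	// API server status, as in helpers_test.go:254.
	collect("out/minikube-linux-arm64", "status", "--format={{.APIServer}}", "-p", profile, "-n", profile)

	// Pods that are not Running, as in helpers_test.go:261.
	collect("kubectl", "--context", profile, "get", "po",
		"-o=jsonpath={.items[*].metadata.name}", "-A",
		"--field-selector=status.phase!=Running")
}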

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-arm64 -p functional-193375 image load --daemon gcr.io/google-containers/addon-resizer:functional-193375 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-arm64 -p functional-193375 image load --daemon gcr.io/google-containers/addon-resizer:functional-193375 --alsologtostderr: (4.045191135s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-193375 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-193375" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.29s)
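Note: the check that fails here is simply that after `image load --daemon`, the tag should show up in `image ls`. A minimal sketch of that assertion is below, assuming the same binary path and profile name as the log above; it is an illustration of the check, not the functional_test.go implementation.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	bin := "out/minikube-linux-arm64"
	profile := "functional-193375"
	tag := "gcr.io/google-containers/addon-resizer:" + profile

	// Load the image from the local Docker daemon into the cluster's runtime.
	if out, err := exec.Command(bin, "-p", profile, "image", "load", "--daemon", tag).CombinedOutput(); err != nil {
		fmt.Printf("image load failed: %v\n%s", err, out)
		os.Exit(1)
	}

	// List images known to the cluster's container runtime and look for the tag.
	out, err := exec.Command(bin, "-p", profile, "image", "ls").CombinedOutput()
	if err != nil {
		fmt.Printf("image ls failed: %v\n%s", err, out)
		os.Exit(1)
	}
	if !strings.Contains(string(out), tag) {
		fmt.Printf("expected %q in image ls output, got:\n%s", tag, out)
		os.Exit(1)
	}
	fmt.Println("image present in the cluster runtime")
}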

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-arm64 -p functional-193375 image load --daemon gcr.io/google-containers/addon-resizer:functional-193375 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-arm64 -p functional-193375 image load --daemon gcr.io/google-containers/addon-resizer:functional-193375 --alsologtostderr: (3.210210327s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-193375 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-193375" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.63s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.624601058s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-193375
functional_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p functional-193375 image load --daemon gcr.io/google-containers/addon-resizer:functional-193375 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-arm64 -p functional-193375 image load --daemon gcr.io/google-containers/addon-resizer:functional-193375 --alsologtostderr: (3.200545511s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-193375 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-193375" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.09s)
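Note: when `image ls` disagrees with what was just loaded, one way to cross-check (assuming the profile is running and uses the containerd runtime, as in this job) is to ask containerd inside the node directly via ctr. The sketch below is illustrative only and is not part of the test suite.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	bin := "out/minikube-linux-arm64"
	profile := "functional-193375"
	tag := "gcr.io/google-containers/addon-resizer:" + profile

	// List containerd's k8s.io image namespace from inside the node.
	out, err := exec.Command(bin, "-p", profile, "ssh", "--",
		"sudo", "ctr", "-n", "k8s.io", "images", "ls", "-q").CombinedOutput()
	if err != nil {
		fmt.Printf("ssh/ctr failed: %v\n%s", err, out)
		return
	}
	if strings.Contains(string(out), tag) {
		fmt.Println("containerd has the image even though `image ls` did not report it")
	} else {
		fmt.Println("containerd does not have the image either; the load itself likely failed")
	}
}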

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-arm64 -p functional-193375 image save gcr.io/google-containers/addon-resizer:functional-193375 /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:385: expected "/home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.59s)
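Note: the assertion here is only that the tar written by `image save` exists on disk afterwards. A minimal sketch of that check, assuming the same binary, profile and target path as the run above:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	bin := "out/minikube-linux-arm64"
	profile := "functional-193375"
	tarball := "/home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar"
	tag := "gcr.io/google-containers/addon-resizer:" + profile

	// Export the image from the cluster runtime to a tar file.
	if out, err := exec.Command(bin, "-p", profile, "image", "save", tag, tarball, "--alsologtostderr").CombinedOutput(); err != nil {
		fmt.Printf("image save failed: %v\n%s", err, out)
		os.Exit(1)
	}

	// The failing check: does the tar actually exist afterwards?
	if _, err := os.Stat(tarball); err != nil {
		fmt.Printf("expected %s to exist after image save: %v\n", tarball, err)
		os.Exit(1)
	}
	fmt.Println("tarball written")
}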

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-arm64 -p functional-193375 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:410: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I0316 17:05:06.525474  318974 out.go:291] Setting OutFile to fd 1 ...
	I0316 17:05:06.525999  318974 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0316 17:05:06.526012  318974 out.go:304] Setting ErrFile to fd 2...
	I0316 17:05:06.526018  318974 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0316 17:05:06.526459  318974 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18277-280225/.minikube/bin
	I0316 17:05:06.527145  318974 config.go:182] Loaded profile config "functional-193375": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0316 17:05:06.527276  318974 config.go:182] Loaded profile config "functional-193375": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0316 17:05:06.527926  318974 cli_runner.go:164] Run: docker container inspect functional-193375 --format={{.State.Status}}
	I0316 17:05:06.545123  318974 ssh_runner.go:195] Run: systemctl --version
	I0316 17:05:06.545229  318974 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-193375
	I0316 17:05:06.561683  318974 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33160 SSHKeyPath:/home/jenkins/minikube-integration/18277-280225/.minikube/machines/functional-193375/id_rsa Username:docker}
	I0316 17:05:06.656087  318974 cache_images.go:286] Loading image from: /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar
	W0316 17:05:06.656146  318974 cache_images.go:254] Failed to load cached images for profile functional-193375. make sure the profile is running. loading images: stat /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar: no such file or directory
	I0316 17:05:06.656171  318974 cache_images.go:262] succeeded pushing to: 
	I0316 17:05:06.656176  318974 cache_images.go:263] failed pushing to: functional-193375

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.19s)
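Note: the stderr above shows why this load fails: the tar expected from the earlier `image save` step was never written ("stat .../addon-resizer-save.tar: no such file or directory"), so there is nothing to push into the profile. A small sketch that guards the load on the tar's existence, using the same paths as the log; purely illustrative, not the test code.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	bin := "out/minikube-linux-arm64"
	profile := "functional-193375"
	tarball := "/home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar"

	// Bail out early when the tar is missing instead of failing inside minikube,
	// which is what happens in the run above.
	if _, err := os.Stat(tarball); err != nil {
		fmt.Printf("tarball missing, nothing to load: %v\n", err)
		os.Exit(1)
	}

	out, err := exec.Command(bin, "-p", profile, "image", "load", tarball, "--alsologtostderr").CombinedOutput()
	if err != nil {
		fmt.Printf("image load from file failed: %v\n%s", err, out)
		os.Exit(1)
	}
	fmt.Printf("%s", out)
}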

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (373.73s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-746380 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
E0316 17:42:11.257975  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/functional-193375/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p old-k8s-version-746380 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: exit status 80 (6m9.297680244s)

                                                
                                                
-- stdout --
	* [old-k8s-version-746380] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18277
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18277-280225/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18277-280225/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the docker driver based on existing profile
	* Starting "old-k8s-version-746380" primary control-plane node in "old-k8s-version-746380" cluster
	* Pulling base image v0.0.42-1710284843-18375 ...
	* Restarting existing docker container for "old-k8s-version-746380" ...
	* Preparing Kubernetes v1.20.0 on containerd 1.6.28 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image registry.k8s.io/echoserver:1.4
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-746380 addons enable metrics-server
	
	* Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0316 17:41:59.947289  481631 out.go:291] Setting OutFile to fd 1 ...
	I0316 17:41:59.949283  481631 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0316 17:41:59.949301  481631 out.go:304] Setting ErrFile to fd 2...
	I0316 17:41:59.949308  481631 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0316 17:41:59.949659  481631 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18277-280225/.minikube/bin
	I0316 17:41:59.950119  481631 out.go:298] Setting JSON to false
	I0316 17:41:59.951214  481631 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":12266,"bootTime":1710598654,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0316 17:41:59.951295  481631 start.go:139] virtualization:  
	I0316 17:41:59.957140  481631 out.go:177] * [old-k8s-version-746380] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0316 17:41:59.959256  481631 out.go:177]   - MINIKUBE_LOCATION=18277
	I0316 17:41:59.959291  481631 notify.go:220] Checking for updates...
	I0316 17:41:59.963497  481631 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0316 17:41:59.965308  481631 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18277-280225/kubeconfig
	I0316 17:41:59.966996  481631 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18277-280225/.minikube
	I0316 17:41:59.968570  481631 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0316 17:41:59.970433  481631 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0316 17:41:59.972761  481631 config.go:182] Loaded profile config "old-k8s-version-746380": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0316 17:41:59.976200  481631 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0316 17:41:59.977948  481631 driver.go:392] Setting default libvirt URI to qemu:///system
	I0316 17:42:00.021342  481631 docker.go:122] docker version: linux-25.0.4:Docker Engine - Community
	I0316 17:42:00.021493  481631 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0316 17:42:00.141384  481631 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:54 OomKillDisable:true NGoroutines:71 SystemTime:2024-03-16 17:42:00.128290068 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0316 17:42:00.141514  481631 docker.go:295] overlay module found
	I0316 17:42:00.144252  481631 out.go:177] * Using the docker driver based on existing profile
	I0316 17:42:00.146362  481631 start.go:297] selected driver: docker
	I0316 17:42:00.146396  481631 start.go:901] validating driver "docker" against &{Name:old-k8s-version-746380 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-746380 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0316 17:42:00.146526  481631 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0316 17:42:00.147348  481631 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0316 17:42:00.263731  481631 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:54 OomKillDisable:true NGoroutines:71 SystemTime:2024-03-16 17:42:00.248972727 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0316 17:42:00.264219  481631 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0316 17:42:00.264273  481631 cni.go:84] Creating CNI manager for ""
	I0316 17:42:00.264284  481631 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0316 17:42:00.264362  481631 start.go:340] cluster config:
	{Name:old-k8s-version-746380 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-746380 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:do
cker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0316 17:42:00.266711  481631 out.go:177] * Starting "old-k8s-version-746380" primary control-plane node in "old-k8s-version-746380" cluster
	I0316 17:42:00.268864  481631 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0316 17:42:00.270836  481631 out.go:177] * Pulling base image v0.0.42-1710284843-18375 ...
	I0316 17:42:00.273324  481631 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0316 17:42:00.273397  481631 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18277-280225/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0316 17:42:00.273410  481631 cache.go:56] Caching tarball of preloaded images
	I0316 17:42:00.273516  481631 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f in local docker daemon
	I0316 17:42:00.273859  481631 preload.go:173] Found /home/jenkins/minikube-integration/18277-280225/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0316 17:42:00.273879  481631 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
	I0316 17:42:00.274007  481631 profile.go:142] Saving config to /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/old-k8s-version-746380/config.json ...
	I0316 17:42:00.320537  481631 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f in local docker daemon, skipping pull
	I0316 17:42:00.320563  481631 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f exists in daemon, skipping load
	I0316 17:42:00.320584  481631 cache.go:194] Successfully downloaded all kic artifacts
	I0316 17:42:00.320618  481631 start.go:360] acquireMachinesLock for old-k8s-version-746380: {Name:mk985aced683677a27cd625ddf73450fbe12b4d6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0316 17:42:00.320692  481631 start.go:364] duration metric: took 52.562µs to acquireMachinesLock for "old-k8s-version-746380"
	I0316 17:42:00.320714  481631 start.go:96] Skipping create...Using existing machine configuration
	I0316 17:42:00.320721  481631 fix.go:54] fixHost starting: 
	I0316 17:42:00.321031  481631 cli_runner.go:164] Run: docker container inspect old-k8s-version-746380 --format={{.State.Status}}
	I0316 17:42:00.368190  481631 fix.go:112] recreateIfNeeded on old-k8s-version-746380: state=Stopped err=<nil>
	W0316 17:42:00.368222  481631 fix.go:138] unexpected machine state, will restart: <nil>
	I0316 17:42:00.370666  481631 out.go:177] * Restarting existing docker container for "old-k8s-version-746380" ...
	I0316 17:42:00.372867  481631 cli_runner.go:164] Run: docker start old-k8s-version-746380
	I0316 17:42:00.865439  481631 cli_runner.go:164] Run: docker container inspect old-k8s-version-746380 --format={{.State.Status}}
	I0316 17:42:00.888731  481631 kic.go:430] container "old-k8s-version-746380" state is running.
	I0316 17:42:00.889111  481631 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-746380
	I0316 17:42:00.918622  481631 profile.go:142] Saving config to /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/old-k8s-version-746380/config.json ...
	I0316 17:42:00.918883  481631 machine.go:94] provisionDockerMachine start ...
	I0316 17:42:00.918951  481631 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-746380
	I0316 17:42:00.958494  481631 main.go:141] libmachine: Using SSH client type: native
	I0316 17:42:00.958784  481631 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e00] 0x3e4660 <nil>  [] 0s} 127.0.0.1 33440 <nil> <nil>}
	I0316 17:42:00.958795  481631 main.go:141] libmachine: About to run SSH command:
	hostname
	I0316 17:42:00.959511  481631 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:33442->127.0.0.1:33440: read: connection reset by peer
	I0316 17:42:04.103264  481631 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-746380
	
	I0316 17:42:04.103344  481631 ubuntu.go:169] provisioning hostname "old-k8s-version-746380"
	I0316 17:42:04.103448  481631 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-746380
	I0316 17:42:04.127069  481631 main.go:141] libmachine: Using SSH client type: native
	I0316 17:42:04.127315  481631 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e00] 0x3e4660 <nil>  [] 0s} 127.0.0.1 33440 <nil> <nil>}
	I0316 17:42:04.127327  481631 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-746380 && echo "old-k8s-version-746380" | sudo tee /etc/hostname
	I0316 17:42:04.289857  481631 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-746380
	
	I0316 17:42:04.289937  481631 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-746380
	I0316 17:42:04.310450  481631 main.go:141] libmachine: Using SSH client type: native
	I0316 17:42:04.310716  481631 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e00] 0x3e4660 <nil>  [] 0s} 127.0.0.1 33440 <nil> <nil>}
	I0316 17:42:04.310742  481631 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-746380' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-746380/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-746380' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0316 17:42:04.471481  481631 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0316 17:42:04.471556  481631 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18277-280225/.minikube CaCertPath:/home/jenkins/minikube-integration/18277-280225/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18277-280225/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18277-280225/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18277-280225/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18277-280225/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18277-280225/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18277-280225/.minikube}
	I0316 17:42:04.471630  481631 ubuntu.go:177] setting up certificates
	I0316 17:42:04.471667  481631 provision.go:84] configureAuth start
	I0316 17:42:04.471755  481631 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-746380
	I0316 17:42:04.504770  481631 provision.go:143] copyHostCerts
	I0316 17:42:04.504840  481631 exec_runner.go:144] found /home/jenkins/minikube-integration/18277-280225/.minikube/ca.pem, removing ...
	I0316 17:42:04.504857  481631 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18277-280225/.minikube/ca.pem
	I0316 17:42:04.504940  481631 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18277-280225/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18277-280225/.minikube/ca.pem (1078 bytes)
	I0316 17:42:04.505031  481631 exec_runner.go:144] found /home/jenkins/minikube-integration/18277-280225/.minikube/cert.pem, removing ...
	I0316 17:42:04.505037  481631 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18277-280225/.minikube/cert.pem
	I0316 17:42:04.505061  481631 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18277-280225/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18277-280225/.minikube/cert.pem (1123 bytes)
	I0316 17:42:04.505114  481631 exec_runner.go:144] found /home/jenkins/minikube-integration/18277-280225/.minikube/key.pem, removing ...
	I0316 17:42:04.505118  481631 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18277-280225/.minikube/key.pem
	I0316 17:42:04.505141  481631 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18277-280225/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18277-280225/.minikube/key.pem (1675 bytes)
	I0316 17:42:04.505186  481631 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18277-280225/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18277-280225/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18277-280225/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-746380 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-746380]
	I0316 17:42:04.670743  481631 provision.go:177] copyRemoteCerts
	I0316 17:42:04.670863  481631 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0316 17:42:04.670941  481631 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-746380
	I0316 17:42:04.696073  481631 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33440 SSHKeyPath:/home/jenkins/minikube-integration/18277-280225/.minikube/machines/old-k8s-version-746380/id_rsa Username:docker}
	I0316 17:42:04.797362  481631 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18277-280225/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0316 17:42:04.827286  481631 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18277-280225/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0316 17:42:04.858577  481631 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18277-280225/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0316 17:42:04.890030  481631 provision.go:87] duration metric: took 418.335172ms to configureAuth
	I0316 17:42:04.890058  481631 ubuntu.go:193] setting minikube options for container-runtime
	I0316 17:42:04.890262  481631 config.go:182] Loaded profile config "old-k8s-version-746380": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0316 17:42:04.890272  481631 machine.go:97] duration metric: took 3.971372099s to provisionDockerMachine
	I0316 17:42:04.890279  481631 start.go:293] postStartSetup for "old-k8s-version-746380" (driver="docker")
	I0316 17:42:04.890290  481631 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0316 17:42:04.890355  481631 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0316 17:42:04.890399  481631 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-746380
	I0316 17:42:04.910572  481631 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33440 SSHKeyPath:/home/jenkins/minikube-integration/18277-280225/.minikube/machines/old-k8s-version-746380/id_rsa Username:docker}
	I0316 17:42:05.028326  481631 ssh_runner.go:195] Run: cat /etc/os-release
	I0316 17:42:05.033266  481631 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0316 17:42:05.033306  481631 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0316 17:42:05.033317  481631 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0316 17:42:05.033324  481631 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0316 17:42:05.033336  481631 filesync.go:126] Scanning /home/jenkins/minikube-integration/18277-280225/.minikube/addons for local assets ...
	I0316 17:42:05.033395  481631 filesync.go:126] Scanning /home/jenkins/minikube-integration/18277-280225/.minikube/files for local assets ...
	I0316 17:42:05.033477  481631 filesync.go:149] local asset: /home/jenkins/minikube-integration/18277-280225/.minikube/files/etc/ssl/certs/2856332.pem -> 2856332.pem in /etc/ssl/certs
	I0316 17:42:05.033606  481631 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0316 17:42:05.044055  481631 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18277-280225/.minikube/files/etc/ssl/certs/2856332.pem --> /etc/ssl/certs/2856332.pem (1708 bytes)
	I0316 17:42:05.074992  481631 start.go:296] duration metric: took 184.695146ms for postStartSetup
	I0316 17:42:05.075090  481631 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0316 17:42:05.075137  481631 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-746380
	I0316 17:42:05.093728  481631 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33440 SSHKeyPath:/home/jenkins/minikube-integration/18277-280225/.minikube/machines/old-k8s-version-746380/id_rsa Username:docker}
	I0316 17:42:05.193559  481631 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0316 17:42:05.199699  481631 fix.go:56] duration metric: took 4.878971113s for fixHost
	I0316 17:42:05.199767  481631 start.go:83] releasing machines lock for "old-k8s-version-746380", held for 4.879064618s
	I0316 17:42:05.199873  481631 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-746380
	I0316 17:42:05.229973  481631 ssh_runner.go:195] Run: cat /version.json
	I0316 17:42:05.230019  481631 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-746380
	I0316 17:42:05.230273  481631 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0316 17:42:05.230321  481631 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-746380
	I0316 17:42:05.266834  481631 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33440 SSHKeyPath:/home/jenkins/minikube-integration/18277-280225/.minikube/machines/old-k8s-version-746380/id_rsa Username:docker}
	I0316 17:42:05.270825  481631 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33440 SSHKeyPath:/home/jenkins/minikube-integration/18277-280225/.minikube/machines/old-k8s-version-746380/id_rsa Username:docker}
	I0316 17:42:05.506148  481631 ssh_runner.go:195] Run: systemctl --version
	I0316 17:42:05.512033  481631 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0316 17:42:05.521015  481631 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0316 17:42:05.541732  481631 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0316 17:42:05.541820  481631 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0316 17:42:05.552077  481631 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
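The two find/sed commands above patch any loopback CNI config so it carries a "name" field and a cniVersion of 1.0.0, then move bridge/podman configs out of the way. A sketch of what the patched file looks like, assuming the base image's stock loopback config (filename and contents illustrative, not captured from this run):

	# Inspect the patched loopback CNI config on the node.
	cat /etc/cni/net.d/*loopback.conf*
	# Expected shape after the patch:
	# {
	#   "cniVersion": "1.0.0",
	#   "name": "loopback",
	#   "type": "loopback"
	# }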
	I0316 17:42:05.552103  481631 start.go:494] detecting cgroup driver to use...
	I0316 17:42:05.552137  481631 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0316 17:42:05.552199  481631 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0316 17:42:05.570834  481631 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0316 17:42:05.584229  481631 docker.go:217] disabling cri-docker service (if available) ...
	I0316 17:42:05.584301  481631 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0316 17:42:05.597194  481631 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0316 17:42:05.608930  481631 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0316 17:42:05.701086  481631 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0316 17:42:05.781221  481631 docker.go:233] disabling docker service ...
	I0316 17:42:05.781294  481631 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0316 17:42:05.794701  481631 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0316 17:42:05.806711  481631 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0316 17:42:05.936576  481631 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0316 17:42:06.056138  481631 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0316 17:42:06.071461  481631 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0316 17:42:06.089357  481631 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0316 17:42:06.101536  481631 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0316 17:42:06.112899  481631 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0316 17:42:06.113007  481631 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0316 17:42:06.123515  481631 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0316 17:42:06.133962  481631 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0316 17:42:06.144219  481631 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0316 17:42:06.154141  481631 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0316 17:42:06.164357  481631 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0316 17:42:06.174785  481631 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0316 17:42:06.184210  481631 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0316 17:42:06.192984  481631 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0316 17:42:06.274363  481631 ssh_runner.go:195] Run: sudo systemctl restart containerd
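The sed commands above rewrite /etc/containerd/config.toml (pause image, cgroup driver, CNI conf dir) before containerd is restarted. A quick spot-check of the resulting settings, assuming containerd's default config layout; the expected values are taken from the commands in this log:

	# Verify the edits landed before relying on the restarted daemon.
	grep -nE 'SystemdCgroup|sandbox_image|conf_dir' /etc/containerd/config.toml
	# Expected, per the sed commands above:
	#   sandbox_image = "registry.k8s.io/pause:3.2"
	#   SystemdCgroup = false
	#   conf_dir = "/etc/cni/net.d"
	sudo systemctl is-active containerd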
	I0316 17:42:06.446630  481631 start.go:541] Will wait 60s for socket path /run/containerd/containerd.sock
	I0316 17:42:06.446774  481631 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0316 17:42:06.450646  481631 start.go:562] Will wait 60s for crictl version
	I0316 17:42:06.450745  481631 ssh_runner.go:195] Run: which crictl
	I0316 17:42:06.454115  481631 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0316 17:42:06.517143  481631 start.go:578] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.28
	RuntimeApiVersion:  v1
	I0316 17:42:06.517256  481631 ssh_runner.go:195] Run: containerd --version
	I0316 17:42:06.547342  481631 ssh_runner.go:195] Run: containerd --version
	I0316 17:42:06.575851  481631 out.go:177] * Preparing Kubernetes v1.20.0 on containerd 1.6.28 ...
	I0316 17:42:06.577767  481631 cli_runner.go:164] Run: docker network inspect old-k8s-version-746380 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0316 17:42:06.593232  481631 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0316 17:42:06.596850  481631 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0316 17:42:06.607506  481631 kubeadm.go:877] updating cluster {Name:old-k8s-version-746380 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-746380 Namespace:default APIServerHAVIP: APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/
home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0316 17:42:06.607683  481631 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0316 17:42:06.607753  481631 ssh_runner.go:195] Run: sudo crictl images --output json
	I0316 17:42:06.644341  481631 containerd.go:612] all images are preloaded for containerd runtime.
	I0316 17:42:06.644367  481631 containerd.go:519] Images already preloaded, skipping extraction
	I0316 17:42:06.644432  481631 ssh_runner.go:195] Run: sudo crictl images --output json
	I0316 17:42:06.684516  481631 containerd.go:612] all images are preloaded for containerd runtime.
	I0316 17:42:06.684541  481631 cache_images.go:84] Images are preloaded, skipping loading
	I0316 17:42:06.684550  481631 kubeadm.go:928] updating node { 192.168.76.2 8443 v1.20.0 containerd true true} ...
	I0316 17:42:06.684671  481631 kubeadm.go:940] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=old-k8s-version-746380 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-746380 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
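The kubelet unit above is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines further down (see the 442-byte scp). A short sketch for checking, on the node, which unit the kubelet actually runs with:

	# Show the base unit plus all drop-ins, then the service state after reload.
	systemctl cat kubelet
	systemctl status kubelet --no-pager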
	I0316 17:42:06.684754  481631 ssh_runner.go:195] Run: sudo crictl info
	I0316 17:42:06.730724  481631 cni.go:84] Creating CNI manager for ""
	I0316 17:42:06.730759  481631 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0316 17:42:06.730770  481631 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0316 17:42:06.730791  481631 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-746380 NodeName:old-k8s-version-746380 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt
StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0316 17:42:06.730928  481631 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "old-k8s-version-746380"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0316 17:42:06.731007  481631 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0316 17:42:06.740573  481631 binaries.go:44] Found k8s binaries, skipping transfer
	I0316 17:42:06.740670  481631 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0316 17:42:06.751215  481631 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (442 bytes)
	I0316 17:42:06.769820  481631 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0316 17:42:06.788048  481631 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2125 bytes)
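The generated kubeadm config is staged as /var/tmp/minikube/kubeadm.yaml.new (the 2125-byte scp above). A hedged sketch for validating it against the pinned control-plane version without changing the cluster; this assumes a kubeadm binary sits next to kubelet under /var/lib/minikube/binaries/v1.20.0, which the log does not show explicitly:

	# Render what kubeadm would do with the staged config; --dry-run makes no changes.
	sudo /var/lib/minikube/binaries/v1.20.0/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml.new --dry-run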
	I0316 17:42:06.807472  481631 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0316 17:42:06.811094  481631 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0316 17:42:06.821748  481631 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0316 17:42:06.903012  481631 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0316 17:42:06.917358  481631 certs.go:68] Setting up /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/old-k8s-version-746380 for IP: 192.168.76.2
	I0316 17:42:06.917391  481631 certs.go:194] generating shared ca certs ...
	I0316 17:42:06.917408  481631 certs.go:226] acquiring lock for ca certs: {Name:mk6d455ecce74ad164a5c9d511b938033d09479f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 17:42:06.917582  481631 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18277-280225/.minikube/ca.key
	I0316 17:42:06.917630  481631 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18277-280225/.minikube/proxy-client-ca.key
	I0316 17:42:06.917639  481631 certs.go:256] generating profile certs ...
	I0316 17:42:06.917729  481631 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/old-k8s-version-746380/client.key
	I0316 17:42:06.917797  481631 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/old-k8s-version-746380/apiserver.key.2a1aaf52
	I0316 17:42:06.917841  481631 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/old-k8s-version-746380/proxy-client.key
	I0316 17:42:06.917947  481631 certs.go:484] found cert: /home/jenkins/minikube-integration/18277-280225/.minikube/certs/285633.pem (1338 bytes)
	W0316 17:42:06.917977  481631 certs.go:480] ignoring /home/jenkins/minikube-integration/18277-280225/.minikube/certs/285633_empty.pem, impossibly tiny 0 bytes
	I0316 17:42:06.917985  481631 certs.go:484] found cert: /home/jenkins/minikube-integration/18277-280225/.minikube/certs/ca-key.pem (1679 bytes)
	I0316 17:42:06.918007  481631 certs.go:484] found cert: /home/jenkins/minikube-integration/18277-280225/.minikube/certs/ca.pem (1078 bytes)
	I0316 17:42:06.918027  481631 certs.go:484] found cert: /home/jenkins/minikube-integration/18277-280225/.minikube/certs/cert.pem (1123 bytes)
	I0316 17:42:06.918048  481631 certs.go:484] found cert: /home/jenkins/minikube-integration/18277-280225/.minikube/certs/key.pem (1675 bytes)
	I0316 17:42:06.918095  481631 certs.go:484] found cert: /home/jenkins/minikube-integration/18277-280225/.minikube/files/etc/ssl/certs/2856332.pem (1708 bytes)
	I0316 17:42:06.918737  481631 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18277-280225/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0316 17:42:06.966657  481631 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18277-280225/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0316 17:42:06.995477  481631 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18277-280225/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0316 17:42:07.094701  481631 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18277-280225/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0316 17:42:07.123471  481631 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/old-k8s-version-746380/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0316 17:42:07.149003  481631 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/old-k8s-version-746380/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0316 17:42:07.174577  481631 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/old-k8s-version-746380/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0316 17:42:07.201093  481631 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/old-k8s-version-746380/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0316 17:42:07.226539  481631 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18277-280225/.minikube/files/etc/ssl/certs/2856332.pem --> /usr/share/ca-certificates/2856332.pem (1708 bytes)
	I0316 17:42:07.251159  481631 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18277-280225/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0316 17:42:07.275247  481631 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18277-280225/.minikube/certs/285633.pem --> /usr/share/ca-certificates/285633.pem (1338 bytes)
	I0316 17:42:07.299398  481631 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0316 17:42:07.317996  481631 ssh_runner.go:195] Run: openssl version
	I0316 17:42:07.323511  481631 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2856332.pem && ln -fs /usr/share/ca-certificates/2856332.pem /etc/ssl/certs/2856332.pem"
	I0316 17:42:07.333278  481631 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2856332.pem
	I0316 17:42:07.336759  481631 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 16 17:01 /usr/share/ca-certificates/2856332.pem
	I0316 17:42:07.336837  481631 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2856332.pem
	I0316 17:42:07.343869  481631 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2856332.pem /etc/ssl/certs/3ec20f2e.0"
	I0316 17:42:07.353021  481631 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0316 17:42:07.362700  481631 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0316 17:42:07.366289  481631 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 16 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0316 17:42:07.366378  481631 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0316 17:42:07.374480  481631 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0316 17:42:07.383503  481631 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/285633.pem && ln -fs /usr/share/ca-certificates/285633.pem /etc/ssl/certs/285633.pem"
	I0316 17:42:07.393247  481631 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/285633.pem
	I0316 17:42:07.396665  481631 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 16 17:01 /usr/share/ca-certificates/285633.pem
	I0316 17:42:07.396752  481631 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/285633.pem
	I0316 17:42:07.403653  481631 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/285633.pem /etc/ssl/certs/51391683.0"
	I0316 17:42:07.412429  481631 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0316 17:42:07.415915  481631 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0316 17:42:07.423039  481631 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0316 17:42:07.430112  481631 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0316 17:42:07.437039  481631 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0316 17:42:07.444390  481631 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0316 17:42:07.451124  481631 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
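The openssl calls above use -checkend 86400, which asks whether each certificate remains valid for at least another 86400 seconds (24 hours). A standalone example of the same check with the exit code made explicit, using one of the certificate paths from this log:

	# Exit code 0: still valid 24h from now. Exit code 1: expires within the window.
	if sudo openssl x509 -noout -checkend 86400 \
	     -in /var/lib/minikube/certs/apiserver-kubelet-client.crt; then
	  echo "cert valid for at least another 24h"
	else
	  echo "cert expires (or has expired) within 24h"
	fi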
	I0316 17:42:07.458230  481631 kubeadm.go:391] StartCluster: {Name:old-k8s-version-746380 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-746380 Namespace:default APIServerHAVIP: APIServerName:minikube
CA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/hom
e/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0316 17:42:07.458350  481631 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0316 17:42:07.458429  481631 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0316 17:42:07.516261  481631 cri.go:89] found id: "a7700f61f9427c51311df28b60dc3da67a68a1be40d1f17810185e95a656508c"
	I0316 17:42:07.516302  481631 cri.go:89] found id: "7ceacd2ed17423521a1815859afffde0ddd075bf9bec82e50bae88c12f2e99e5"
	I0316 17:42:07.516308  481631 cri.go:89] found id: "22beb4846f86e0b94f967a82643633bda14c92a967549166bf63c77fcd3a5673"
	I0316 17:42:07.516311  481631 cri.go:89] found id: "ccbf82b14ebc82618a0db0f8cce371995c37bb4d2cd2b873a46ac53578fbec9b"
	I0316 17:42:07.516315  481631 cri.go:89] found id: "bb9fc8b360819c7a19f5e182ffa90ecf3dc71344631dac019d43ec3d489bbb79"
	I0316 17:42:07.516318  481631 cri.go:89] found id: "3ffcf3139cf08c2e735e53f3bed4469b3466bcbedc7c3cb0bba55d896472640b"
	I0316 17:42:07.516321  481631 cri.go:89] found id: "0340e5ca0be60b47abce880f66d4c4e5fc876c20b19e0b5c769ec2a4f1b8547b"
	I0316 17:42:07.516324  481631 cri.go:89] found id: "53e126d87d370fc7c40afb41dc1a7f49e87707a49e2d1486adf3a6445555d955"
	I0316 17:42:07.516327  481631 cri.go:89] found id: ""
	I0316 17:42:07.516389  481631 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0316 17:42:07.529407  481631 cri.go:116] JSON = null
	W0316 17:42:07.529460  481631 kubeadm.go:398] unpause failed: list paused: list returned 0 containers, but ps returned 8
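The warning above is produced by comparing runc's list of containers in the k8s.io root with crictl's kube-system container list. The same comparison can be reproduced by hand on the node (for example via out/minikube-linux-arm64 -p old-k8s-version-746380 ssh), using the commands shown in this log:

	# runc's view of the k8s.io namespace vs. the number of containers crictl reports.
	sudo runc --root /run/containerd/runc/k8s.io list -f json
	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system | wc -l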
	I0316 17:42:07.529536  481631 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0316 17:42:07.538800  481631 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0316 17:42:07.538835  481631 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0316 17:42:07.538842  481631 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0316 17:42:07.538900  481631 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0316 17:42:07.553605  481631 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0316 17:42:07.554117  481631 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-746380" does not appear in /home/jenkins/minikube-integration/18277-280225/kubeconfig
	I0316 17:42:07.554248  481631 kubeconfig.go:62] /home/jenkins/minikube-integration/18277-280225/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-746380" cluster setting kubeconfig missing "old-k8s-version-746380" context setting]
	I0316 17:42:07.554630  481631 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18277-280225/kubeconfig: {Name:mk8864b14e2dcaa49893fcecc40453b6fe139389 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
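After the repair above, the kubeconfig should carry both a cluster and a context named old-k8s-version-746380. A quick sketch for confirming that, using the kubeconfig path from this run:

	# List the contexts and clusters now present in the repaired kubeconfig.
	kubectl --kubeconfig /home/jenkins/minikube-integration/18277-280225/kubeconfig config get-contexts
	kubectl --kubeconfig /home/jenkins/minikube-integration/18277-280225/kubeconfig config get-clusters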
	I0316 17:42:07.556475  481631 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0316 17:42:07.567929  481631 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.76.2
	I0316 17:42:07.567971  481631 kubeadm.go:591] duration metric: took 29.124354ms to restartPrimaryControlPlane
	I0316 17:42:07.567980  481631 kubeadm.go:393] duration metric: took 109.759299ms to StartCluster
	I0316 17:42:07.567995  481631 settings.go:142] acquiring lock: {Name:mkcd5f7504890e5ae44ee0b7a2caa6ef5c6c8fbc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 17:42:07.568064  481631 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18277-280225/kubeconfig
	I0316 17:42:07.568760  481631 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18277-280225/kubeconfig: {Name:mk8864b14e2dcaa49893fcecc40453b6fe139389 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 17:42:07.568995  481631 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0316 17:42:07.571966  481631 out.go:177] * Verifying Kubernetes components...
	I0316 17:42:07.569359  481631 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0316 17:42:07.569512  481631 config.go:182] Loaded profile config "old-k8s-version-746380": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0316 17:42:07.574387  481631 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-746380"
	I0316 17:42:07.574411  481631 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-746380"
	W0316 17:42:07.574419  481631 addons.go:243] addon storage-provisioner should already be in state true
	I0316 17:42:07.574429  481631 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0316 17:42:07.574459  481631 host.go:66] Checking if "old-k8s-version-746380" exists ...
	I0316 17:42:07.574544  481631 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-746380"
	I0316 17:42:07.574574  481631 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-746380"
	I0316 17:42:07.574895  481631 cli_runner.go:164] Run: docker container inspect old-k8s-version-746380 --format={{.State.Status}}
	I0316 17:42:07.574906  481631 cli_runner.go:164] Run: docker container inspect old-k8s-version-746380 --format={{.State.Status}}
	I0316 17:42:07.575271  481631 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-746380"
	I0316 17:42:07.575300  481631 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-746380"
	W0316 17:42:07.575307  481631 addons.go:243] addon metrics-server should already be in state true
	I0316 17:42:07.575335  481631 host.go:66] Checking if "old-k8s-version-746380" exists ...
	I0316 17:42:07.575357  481631 addons.go:69] Setting dashboard=true in profile "old-k8s-version-746380"
	I0316 17:42:07.575417  481631 addons.go:234] Setting addon dashboard=true in "old-k8s-version-746380"
	W0316 17:42:07.575449  481631 addons.go:243] addon dashboard should already be in state true
	I0316 17:42:07.575500  481631 host.go:66] Checking if "old-k8s-version-746380" exists ...
	I0316 17:42:07.575837  481631 cli_runner.go:164] Run: docker container inspect old-k8s-version-746380 --format={{.State.Status}}
	I0316 17:42:07.576184  481631 cli_runner.go:164] Run: docker container inspect old-k8s-version-746380 --format={{.State.Status}}
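Each addon enabled above first checks that the profile's container still exists, which is what the repeated cli_runner `docker container inspect ... --format={{.State.Status}}` calls are doing. The sketch below reproduces just that status probe by shelling out to the docker CLI; containerStatus is a hypothetical helper, not minikube's cli_runner.

	// container_status_sketch.go - illustration of the docker inspect status probe logged above.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func containerStatus(name string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect", name,
			"--format", "{{.State.Status}}").Output()
		if err != nil {
			return "", fmt.Errorf("docker inspect %s: %w", name, err)
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		status, err := containerStatus("old-k8s-version-746380")
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		fmt.Println("container status:", status) // e.g. "running"
	}
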
	I0316 17:42:07.613247  481631 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0316 17:42:07.621116  481631 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0316 17:42:07.621140  481631 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0316 17:42:07.621205  481631 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-746380
	I0316 17:42:07.637685  481631 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0316 17:42:07.643208  481631 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0316 17:42:07.640600  481631 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-746380"
	I0316 17:42:07.646929  481631 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0316 17:42:07.645275  481631 addons.go:426] installing /etc/kubernetes/addons/dashboard-ns.yaml
	W0316 17:42:07.645290  481631 addons.go:243] addon default-storageclass should already be in state true
	I0316 17:42:07.648912  481631 host.go:66] Checking if "old-k8s-version-746380" exists ...
	I0316 17:42:07.648938  481631 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0316 17:42:07.648955  481631 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0316 17:42:07.649015  481631 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-746380
	I0316 17:42:07.649377  481631 cli_runner.go:164] Run: docker container inspect old-k8s-version-746380 --format={{.State.Status}}
	I0316 17:42:07.649666  481631 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0316 17:42:07.649730  481631 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-746380
	I0316 17:42:07.695739  481631 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33440 SSHKeyPath:/home/jenkins/minikube-integration/18277-280225/.minikube/machines/old-k8s-version-746380/id_rsa Username:docker}
	I0316 17:42:07.697095  481631 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33440 SSHKeyPath:/home/jenkins/minikube-integration/18277-280225/.minikube/machines/old-k8s-version-746380/id_rsa Username:docker}
	I0316 17:42:07.724377  481631 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33440 SSHKeyPath:/home/jenkins/minikube-integration/18277-280225/.minikube/machines/old-k8s-version-746380/id_rsa Username:docker}
	I0316 17:42:07.730493  481631 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0316 17:42:07.730513  481631 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0316 17:42:07.730577  481631 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-746380
	I0316 17:42:07.740128  481631 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0316 17:42:07.763290  481631 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33440 SSHKeyPath:/home/jenkins/minikube-integration/18277-280225/.minikube/machines/old-k8s-version-746380/id_rsa Username:docker}
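The "scp memory --> /etc/kubernetes/addons/*.yaml" lines above describe streaming each addon manifest from memory into the node over the SSH session opened on 127.0.0.1:33440 with the profile's id_rsa key. The following is a rough sketch of that idea, assuming golang.org/x/crypto/ssh and piping the bytes into `sudo tee` on the remote side; it is not minikube's ssh_runner, and copyToNode plus the placeholder manifest bytes are illustrative only.

	// scp_memory_sketch.go - stream an in-memory manifest to a path inside the node over SSH.
	package main

	import (
		"log"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func copyToNode(addr, keyPath, remotePath string, data []byte) error {
		key, err := os.ReadFile(keyPath)
		if err != nil {
			return err
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			return err
		}
		client, err := ssh.Dial("tcp", addr, &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for a local test node
		})
		if err != nil {
			return err
		}
		defer client.Close()

		session, err := client.NewSession()
		if err != nil {
			return err
		}
		defer session.Close()

		stdin, err := session.StdinPipe()
		if err != nil {
			return err
		}
		// Write the manifest bytes into `sudo tee <remotePath>` on the node.
		if err := session.Start("sudo tee " + remotePath + " >/dev/null"); err != nil {
			return err
		}
		if _, err := stdin.Write(data); err != nil {
			return err
		}
		stdin.Close()
		return session.Wait()
	}

	func main() {
		manifest := []byte("# addon manifest bytes would go here\n")
		err := copyToNode("127.0.0.1:33440",
			"/home/jenkins/minikube-integration/18277-280225/.minikube/machines/old-k8s-version-746380/id_rsa",
			"/etc/kubernetes/addons/storage-provisioner.yaml", manifest)
		if err != nil {
			log.Fatal(err)
		}
	}
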
	I0316 17:42:07.785988  481631 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-746380" to be "Ready" ...
	I0316 17:42:07.852114  481631 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0316 17:42:07.852138  481631 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0316 17:42:07.874441  481631 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0316 17:42:07.874509  481631 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0316 17:42:07.877276  481631 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0316 17:42:07.902505  481631 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0316 17:42:07.902570  481631 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0316 17:42:07.906205  481631 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0316 17:42:07.906271  481631 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0316 17:42:07.923257  481631 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0316 17:42:07.951835  481631 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0316 17:42:07.980873  481631 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0316 17:42:07.980902  481631 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0316 17:42:08.073035  481631 addons.go:426] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0316 17:42:08.073063  481631 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	W0316 17:42:08.136815  481631 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0316 17:42:08.136853  481631 retry.go:31] will retry after 179.654788ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0316 17:42:08.141049  481631 addons.go:426] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0316 17:42:08.141075  481631 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	W0316 17:42:08.144650  481631 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0316 17:42:08.144681  481631 retry.go:31] will retry after 190.103906ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0316 17:42:08.160948  481631 addons.go:426] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0316 17:42:08.160974  481631 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0316 17:42:08.182599  481631 addons.go:426] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0316 17:42:08.182625  481631 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0316 17:42:08.202736  481631 addons.go:426] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0316 17:42:08.202769  481631 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	W0316 17:42:08.215669  481631 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0316 17:42:08.215705  481631 retry.go:31] will retry after 320.347104ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0316 17:42:08.223284  481631 addons.go:426] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0316 17:42:08.223314  481631 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0316 17:42:08.242524  481631 addons.go:426] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0316 17:42:08.242589  481631 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0316 17:42:08.261453  481631 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0316 17:42:08.317232  481631 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0316 17:42:08.335446  481631 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0316 17:42:08.340646  481631 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0316 17:42:08.340715  481631 retry.go:31] will retry after 128.141016ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0316 17:42:08.428854  481631 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0316 17:42:08.428907  481631 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0316 17:42:08.428935  481631 retry.go:31] will retry after 435.220523ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0316 17:42:08.428957  481631 retry.go:31] will retry after 287.290957ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
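The long run of "apply failed, will retry" / "will retry after ..." pairs that starts here is expected while the apiserver is still coming back after the control-plane restart: every `kubectl apply` hits "connection refused" on localhost:8443 and is retried after a short, growing delay until it eventually succeeds (around 17:42:28 below). The loop sketched next is an illustration of that pattern with the exact command from the log; it is not minikube's retry package, and the backoff here is a simple doubling where minikube also adds jitter.

	// apply_retry_sketch.go - retry a kubectl apply while the apiserver restarts.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func applyWithRetry(manifest string, budget time.Duration) error {
		deadline := time.Now().Add(budget)
		delay := 200 * time.Millisecond
		for {
			cmd := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
				"/var/lib/minikube/binaries/v1.20.0/kubectl", "apply", "--force", "-f", manifest)
			out, err := cmd.CombinedOutput()
			if err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("apply %s: %v\n%s", manifest, err, out)
			}
			fmt.Printf("apply failed, will retry after %v: %v\n", delay, err)
			time.Sleep(delay)
			delay *= 2 // back off; minikube's retry adds jitter on top of this
		}
	}

	func main() {
		if err := applyWithRetry("/etc/kubernetes/addons/storage-provisioner.yaml", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
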
	I0316 17:42:08.470052  481631 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0316 17:42:08.536590  481631 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0316 17:42:08.592022  481631 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0316 17:42:08.592099  481631 retry.go:31] will retry after 280.350723ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0316 17:42:08.634636  481631 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0316 17:42:08.634675  481631 retry.go:31] will retry after 198.626244ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0316 17:42:08.717005  481631 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0316 17:42:08.795780  481631 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0316 17:42:08.795813  481631 retry.go:31] will retry after 490.542891ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0316 17:42:08.834117  481631 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0316 17:42:08.864589  481631 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0316 17:42:08.872906  481631 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0316 17:42:08.974766  481631 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0316 17:42:08.974807  481631 retry.go:31] will retry after 714.630047ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0316 17:42:09.016327  481631 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0316 17:42:09.016366  481631 retry.go:31] will retry after 754.198803ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0316 17:42:09.047216  481631 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0316 17:42:09.047313  481631 retry.go:31] will retry after 416.687724ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0316 17:42:09.286823  481631 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0316 17:42:09.361371  481631 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0316 17:42:09.361404  481631 retry.go:31] will retry after 980.284155ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0316 17:42:09.464614  481631 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0316 17:42:09.574679  481631 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0316 17:42:09.574795  481631 retry.go:31] will retry after 446.126299ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0316 17:42:09.690107  481631 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0316 17:42:09.765050  481631 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0316 17:42:09.765084  481631 retry.go:31] will retry after 693.920365ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0316 17:42:09.771269  481631 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0316 17:42:09.786856  481631 node_ready.go:53] error getting node "old-k8s-version-746380": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-746380": dial tcp 192.168.76.2:8443: connect: connection refused
	W0316 17:42:09.846185  481631 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0316 17:42:09.846218  481631 retry.go:31] will retry after 612.609863ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0316 17:42:10.021806  481631 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0316 17:42:10.121711  481631 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0316 17:42:10.121813  481631 retry.go:31] will retry after 1.192086834s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0316 17:42:10.342840  481631 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0316 17:42:10.418840  481631 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0316 17:42:10.418877  481631 retry.go:31] will retry after 873.814223ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0316 17:42:10.460040  481631 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0316 17:42:10.460326  481631 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0316 17:42:10.639028  481631 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0316 17:42:10.639062  481631 retry.go:31] will retry after 1.753084596s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0316 17:42:10.639110  481631 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0316 17:42:10.639122  481631 retry.go:31] will retry after 1.380638196s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0316 17:42:11.293819  481631 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0316 17:42:11.314130  481631 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0316 17:42:11.374599  481631 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0316 17:42:11.374634  481631 retry.go:31] will retry after 2.459510127s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0316 17:42:11.408894  481631 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0316 17:42:11.408928  481631 retry.go:31] will retry after 2.414414509s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0316 17:42:12.020870  481631 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0316 17:42:12.105981  481631 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0316 17:42:12.106012  481631 retry.go:31] will retry after 1.312651252s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0316 17:42:12.286756  481631 node_ready.go:53] error getting node "old-k8s-version-746380": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-746380": dial tcp 192.168.76.2:8443: connect: connection refused
	I0316 17:42:12.392990  481631 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0316 17:42:12.474768  481631 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0316 17:42:12.474801  481631 retry.go:31] will retry after 2.448875106s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0316 17:42:13.419832  481631 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0316 17:42:13.514403  481631 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0316 17:42:13.514435  481631 retry.go:31] will retry after 1.571292698s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0316 17:42:13.823691  481631 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0316 17:42:13.835027  481631 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0316 17:42:13.927148  481631 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0316 17:42:13.927253  481631 retry.go:31] will retry after 1.841493824s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0316 17:42:13.951547  481631 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0316 17:42:13.951582  481631 retry.go:31] will retry after 3.059458316s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0316 17:42:14.287033  481631 node_ready.go:53] error getting node "old-k8s-version-746380": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-746380": dial tcp 192.168.76.2:8443: connect: connection refused
	I0316 17:42:14.924097  481631 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0316 17:42:15.041804  481631 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0316 17:42:15.041882  481631 retry.go:31] will retry after 1.528249633s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0316 17:42:15.086043  481631 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0316 17:42:15.172182  481631 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0316 17:42:15.172217  481631 retry.go:31] will retry after 5.596650635s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0316 17:42:15.769278  481631 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0316 17:42:15.846741  481631 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0316 17:42:15.846784  481631 retry.go:31] will retry after 3.704135029s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0316 17:42:16.287439  481631 node_ready.go:53] error getting node "old-k8s-version-746380": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-746380": dial tcp 192.168.76.2:8443: connect: connection refused
	I0316 17:42:16.570807  481631 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0316 17:42:17.011273  481631 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0316 17:42:19.551066  481631 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0316 17:42:20.769264  481631 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0316 17:42:27.041687  481631 node_ready.go:49] node "old-k8s-version-746380" has status "Ready":"True"
	I0316 17:42:27.041727  481631 node_ready.go:38] duration metric: took 19.255705475s for node "old-k8s-version-746380" to be "Ready" ...
	I0316 17:42:27.041738  481631 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
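The node_ready.go lines above (the 6m wait, the transient "connection refused" errors at 17:42:09 to 17:42:16, and the Ready result after 19.2s) amount to polling the node object until its Ready condition turns True, treating apiserver dial errors as retryable. A minimal client-go sketch of that wait follows; waitForNodeReady is a hypothetical helper, and the kubeconfig path, node name, and timeout are taken from the log.

	// node_ready_sketch.go - poll a node until its Ready condition is True.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func waitForNodeReady(kubeconfig, name string, timeout time.Duration) error {
		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			return err
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			return err
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			node, err := client.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
			if err != nil {
				// e.g. "dial tcp 192.168.76.2:8443: connect: connection refused" - keep polling.
				fmt.Println("error getting node:", err)
			} else {
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("node %q not Ready within %v", name, timeout)
	}

	func main() {
		err := waitForNodeReady("/home/jenkins/minikube-integration/18277-280225/kubeconfig",
			"old-k8s-version-746380", 6*time.Minute)
		if err != nil {
			fmt.Println(err)
		}
	}
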
	I0316 17:42:27.263520  481631 pod_ready.go:78] waiting up to 6m0s for pod "coredns-74ff55c5b-jcdh5" in "kube-system" namespace to be "Ready" ...
	I0316 17:42:28.025210  481631 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (11.454364666s)
	I0316 17:42:28.237051  481631 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (11.225739368s)
	I0316 17:42:28.568185  481631 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.798881412s)
	I0316 17:42:28.568225  481631 addons.go:470] Verifying addon metrics-server=true in "old-k8s-version-746380"
	I0316 17:42:28.568329  481631 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (9.017227242s)
	I0316 17:42:28.570417  481631 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-746380 addons enable metrics-server
	
	I0316 17:42:28.572655  481631 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0316 17:42:28.574595  481631 addons.go:505] duration metric: took 21.005237526s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
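The pod_ready.go loop that follows repeatedly lists the system-critical pods and re-checks each pod's Ready condition until it turns True or the per-pod budget runs out. A minimal client-go sketch of that kind of poll is below; it is not minikube's code, and the 2-second interval, the kube-dns selector, and the default kubeconfig path are illustrative assumptions only.

	// Minimal sketch (not minikube's implementation): poll pods matching a label
	// selector until every one reports PodReady=True, or the timeout expires.
	// Interval, timeout, selector and kubeconfig path are illustrative assumptions.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the pod's PodReady condition is True.
	func podReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		// Poll every 2s, for up to 6m, until all matching pods are Ready.
		err = wait.PollImmediate(2*time.Second, 6*time.Minute, func() (bool, error) {
			pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(),
				metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
			if err != nil {
				return false, nil // treat API errors as transient and keep polling
			}
			if len(pods.Items) == 0 {
				return false, nil
			}
			for i := range pods.Items {
				if !podReady(&pods.Items[i]) {
					return false, nil
				}
			}
			return true, nil
		})
		fmt.Println("wait result:", err) // nil on success, timeout error otherwise
	}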
	I0316 17:42:29.270360  481631 pod_ready.go:102] pod "coredns-74ff55c5b-jcdh5" in "kube-system" namespace has status "Ready":"False"
	I0316 17:42:31.774385  481631 pod_ready.go:102] pod "coredns-74ff55c5b-jcdh5" in "kube-system" namespace has status "Ready":"False"
	I0316 17:42:34.271184  481631 pod_ready.go:102] pod "coredns-74ff55c5b-jcdh5" in "kube-system" namespace has status "Ready":"False"
	I0316 17:42:36.796678  481631 pod_ready.go:102] pod "coredns-74ff55c5b-jcdh5" in "kube-system" namespace has status "Ready":"False"
	I0316 17:42:37.770464  481631 pod_ready.go:92] pod "coredns-74ff55c5b-jcdh5" in "kube-system" namespace has status "Ready":"True"
	I0316 17:42:37.770491  481631 pod_ready.go:81] duration metric: took 10.506937341s for pod "coredns-74ff55c5b-jcdh5" in "kube-system" namespace to be "Ready" ...
	I0316 17:42:37.770502  481631 pod_ready.go:78] waiting up to 6m0s for pod "etcd-old-k8s-version-746380" in "kube-system" namespace to be "Ready" ...
	I0316 17:42:39.776868  481631 pod_ready.go:102] pod "etcd-old-k8s-version-746380" in "kube-system" namespace has status "Ready":"False"
	I0316 17:42:41.777472  481631 pod_ready.go:102] pod "etcd-old-k8s-version-746380" in "kube-system" namespace has status "Ready":"False"
	I0316 17:42:44.276680  481631 pod_ready.go:102] pod "etcd-old-k8s-version-746380" in "kube-system" namespace has status "Ready":"False"
	I0316 17:42:46.353053  481631 pod_ready.go:102] pod "etcd-old-k8s-version-746380" in "kube-system" namespace has status "Ready":"False"
	I0316 17:42:48.781326  481631 pod_ready.go:102] pod "etcd-old-k8s-version-746380" in "kube-system" namespace has status "Ready":"False"
	I0316 17:42:51.277604  481631 pod_ready.go:102] pod "etcd-old-k8s-version-746380" in "kube-system" namespace has status "Ready":"False"
	I0316 17:42:53.277911  481631 pod_ready.go:102] pod "etcd-old-k8s-version-746380" in "kube-system" namespace has status "Ready":"False"
	I0316 17:42:55.776446  481631 pod_ready.go:102] pod "etcd-old-k8s-version-746380" in "kube-system" namespace has status "Ready":"False"
	I0316 17:42:57.776991  481631 pod_ready.go:102] pod "etcd-old-k8s-version-746380" in "kube-system" namespace has status "Ready":"False"
	I0316 17:43:00.291704  481631 pod_ready.go:102] pod "etcd-old-k8s-version-746380" in "kube-system" namespace has status "Ready":"False"
	I0316 17:43:02.777069  481631 pod_ready.go:102] pod "etcd-old-k8s-version-746380" in "kube-system" namespace has status "Ready":"False"
	I0316 17:43:05.280299  481631 pod_ready.go:102] pod "etcd-old-k8s-version-746380" in "kube-system" namespace has status "Ready":"False"
	I0316 17:43:07.777800  481631 pod_ready.go:102] pod "etcd-old-k8s-version-746380" in "kube-system" namespace has status "Ready":"False"
	I0316 17:43:10.278020  481631 pod_ready.go:102] pod "etcd-old-k8s-version-746380" in "kube-system" namespace has status "Ready":"False"
	I0316 17:43:12.279412  481631 pod_ready.go:102] pod "etcd-old-k8s-version-746380" in "kube-system" namespace has status "Ready":"False"
	I0316 17:43:14.787181  481631 pod_ready.go:102] pod "etcd-old-k8s-version-746380" in "kube-system" namespace has status "Ready":"False"
	I0316 17:43:17.276159  481631 pod_ready.go:102] pod "etcd-old-k8s-version-746380" in "kube-system" namespace has status "Ready":"False"
	I0316 17:43:19.276440  481631 pod_ready.go:102] pod "etcd-old-k8s-version-746380" in "kube-system" namespace has status "Ready":"False"
	I0316 17:43:21.276706  481631 pod_ready.go:102] pod "etcd-old-k8s-version-746380" in "kube-system" namespace has status "Ready":"False"
	I0316 17:43:23.277636  481631 pod_ready.go:102] pod "etcd-old-k8s-version-746380" in "kube-system" namespace has status "Ready":"False"
	I0316 17:43:25.280954  481631 pod_ready.go:102] pod "etcd-old-k8s-version-746380" in "kube-system" namespace has status "Ready":"False"
	I0316 17:43:27.786071  481631 pod_ready.go:102] pod "etcd-old-k8s-version-746380" in "kube-system" namespace has status "Ready":"False"
	I0316 17:43:30.278286  481631 pod_ready.go:102] pod "etcd-old-k8s-version-746380" in "kube-system" namespace has status "Ready":"False"
	I0316 17:43:32.280975  481631 pod_ready.go:102] pod "etcd-old-k8s-version-746380" in "kube-system" namespace has status "Ready":"False"
	I0316 17:43:34.782296  481631 pod_ready.go:102] pod "etcd-old-k8s-version-746380" in "kube-system" namespace has status "Ready":"False"
	I0316 17:43:36.776109  481631 pod_ready.go:92] pod "etcd-old-k8s-version-746380" in "kube-system" namespace has status "Ready":"True"
	I0316 17:43:36.776134  481631 pod_ready.go:81] duration metric: took 59.005624808s for pod "etcd-old-k8s-version-746380" in "kube-system" namespace to be "Ready" ...
	I0316 17:43:36.776148  481631 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-746380" in "kube-system" namespace to be "Ready" ...
	I0316 17:43:36.780738  481631 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-746380" in "kube-system" namespace has status "Ready":"True"
	I0316 17:43:36.780761  481631 pod_ready.go:81] duration metric: took 4.604681ms for pod "kube-apiserver-old-k8s-version-746380" in "kube-system" namespace to be "Ready" ...
	I0316 17:43:36.780772  481631 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-746380" in "kube-system" namespace to be "Ready" ...
	I0316 17:43:36.786250  481631 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-746380" in "kube-system" namespace has status "Ready":"True"
	I0316 17:43:36.786272  481631 pod_ready.go:81] duration metric: took 5.493126ms for pod "kube-controller-manager-old-k8s-version-746380" in "kube-system" namespace to be "Ready" ...
	I0316 17:43:36.786284  481631 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-x59w9" in "kube-system" namespace to be "Ready" ...
	I0316 17:43:36.790728  481631 pod_ready.go:92] pod "kube-proxy-x59w9" in "kube-system" namespace has status "Ready":"True"
	I0316 17:43:36.790753  481631 pod_ready.go:81] duration metric: took 4.46069ms for pod "kube-proxy-x59w9" in "kube-system" namespace to be "Ready" ...
	I0316 17:43:36.790767  481631 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-746380" in "kube-system" namespace to be "Ready" ...
	I0316 17:43:38.797163  481631 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-746380" in "kube-system" namespace has status "Ready":"False"
	I0316 17:43:41.296756  481631 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-746380" in "kube-system" namespace has status "Ready":"False"
	I0316 17:43:43.297411  481631 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-746380" in "kube-system" namespace has status "Ready":"False"
	I0316 17:43:45.305559  481631 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-746380" in "kube-system" namespace has status "Ready":"False"
	I0316 17:43:47.355391  481631 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-746380" in "kube-system" namespace has status "Ready":"False"
	I0316 17:43:49.797601  481631 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-746380" in "kube-system" namespace has status "Ready":"False"
	I0316 17:43:52.297729  481631 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-746380" in "kube-system" namespace has status "Ready":"False"
	I0316 17:43:54.299980  481631 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-746380" in "kube-system" namespace has status "Ready":"False"
	I0316 17:43:56.796309  481631 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-746380" in "kube-system" namespace has status "Ready":"True"
	I0316 17:43:56.796334  481631 pod_ready.go:81] duration metric: took 20.005558201s for pod "kube-scheduler-old-k8s-version-746380" in "kube-system" namespace to be "Ready" ...
	I0316 17:43:56.796346  481631 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-9975d5f86-s65lt" in "kube-system" namespace to be "Ready" ...
	I0316 17:43:58.802818  481631 pod_ready.go:102] pod "metrics-server-9975d5f86-s65lt" in "kube-system" namespace has status "Ready":"False"
	I0316 17:44:00.803336  481631 pod_ready.go:102] pod "metrics-server-9975d5f86-s65lt" in "kube-system" namespace has status "Ready":"False"
	I0316 17:44:02.804530  481631 pod_ready.go:102] pod "metrics-server-9975d5f86-s65lt" in "kube-system" namespace has status "Ready":"False"
	I0316 17:44:05.305326  481631 pod_ready.go:102] pod "metrics-server-9975d5f86-s65lt" in "kube-system" namespace has status "Ready":"False"
	I0316 17:44:07.803816  481631 pod_ready.go:102] pod "metrics-server-9975d5f86-s65lt" in "kube-system" namespace has status "Ready":"False"
	I0316 17:44:10.302851  481631 pod_ready.go:102] pod "metrics-server-9975d5f86-s65lt" in "kube-system" namespace has status "Ready":"False"
	I0316 17:44:12.303186  481631 pod_ready.go:102] pod "metrics-server-9975d5f86-s65lt" in "kube-system" namespace has status "Ready":"False"
	I0316 17:44:14.802367  481631 pod_ready.go:102] pod "metrics-server-9975d5f86-s65lt" in "kube-system" namespace has status "Ready":"False"
	I0316 17:44:17.302476  481631 pod_ready.go:102] pod "metrics-server-9975d5f86-s65lt" in "kube-system" namespace has status "Ready":"False"
	I0316 17:44:19.802626  481631 pod_ready.go:102] pod "metrics-server-9975d5f86-s65lt" in "kube-system" namespace has status "Ready":"False"
	I0316 17:44:22.302400  481631 pod_ready.go:102] pod "metrics-server-9975d5f86-s65lt" in "kube-system" namespace has status "Ready":"False"
	I0316 17:44:24.803426  481631 pod_ready.go:102] pod "metrics-server-9975d5f86-s65lt" in "kube-system" namespace has status "Ready":"False"
	I0316 17:44:27.302582  481631 pod_ready.go:102] pod "metrics-server-9975d5f86-s65lt" in "kube-system" namespace has status "Ready":"False"
	I0316 17:44:29.303164  481631 pod_ready.go:102] pod "metrics-server-9975d5f86-s65lt" in "kube-system" namespace has status "Ready":"False"
	I0316 17:44:31.801470  481631 pod_ready.go:102] pod "metrics-server-9975d5f86-s65lt" in "kube-system" namespace has status "Ready":"False"
	I0316 17:44:33.802696  481631 pod_ready.go:102] pod "metrics-server-9975d5f86-s65lt" in "kube-system" namespace has status "Ready":"False"
	I0316 17:44:36.303247  481631 pod_ready.go:102] pod "metrics-server-9975d5f86-s65lt" in "kube-system" namespace has status "Ready":"False"
	I0316 17:44:38.802080  481631 pod_ready.go:102] pod "metrics-server-9975d5f86-s65lt" in "kube-system" namespace has status "Ready":"False"
	I0316 17:44:41.303181  481631 pod_ready.go:102] pod "metrics-server-9975d5f86-s65lt" in "kube-system" namespace has status "Ready":"False"
	I0316 17:44:43.802323  481631 pod_ready.go:102] pod "metrics-server-9975d5f86-s65lt" in "kube-system" namespace has status "Ready":"False"
	I0316 17:44:46.302735  481631 pod_ready.go:102] pod "metrics-server-9975d5f86-s65lt" in "kube-system" namespace has status "Ready":"False"
	I0316 17:44:48.803261  481631 pod_ready.go:102] pod "metrics-server-9975d5f86-s65lt" in "kube-system" namespace has status "Ready":"False"
	I0316 17:44:51.302213  481631 pod_ready.go:102] pod "metrics-server-9975d5f86-s65lt" in "kube-system" namespace has status "Ready":"False"
	I0316 17:44:53.302998  481631 pod_ready.go:102] pod "metrics-server-9975d5f86-s65lt" in "kube-system" namespace has status "Ready":"False"
	I0316 17:44:55.310097  481631 pod_ready.go:102] pod "metrics-server-9975d5f86-s65lt" in "kube-system" namespace has status "Ready":"False"
	I0316 17:44:57.803197  481631 pod_ready.go:102] pod "metrics-server-9975d5f86-s65lt" in "kube-system" namespace has status "Ready":"False"
	I0316 17:45:00.326411  481631 pod_ready.go:102] pod "metrics-server-9975d5f86-s65lt" in "kube-system" namespace has status "Ready":"False"
	I0316 17:45:02.802450  481631 pod_ready.go:102] pod "metrics-server-9975d5f86-s65lt" in "kube-system" namespace has status "Ready":"False"
	I0316 17:45:05.306732  481631 pod_ready.go:102] pod "metrics-server-9975d5f86-s65lt" in "kube-system" namespace has status "Ready":"False"
	I0316 17:45:07.803416  481631 pod_ready.go:102] pod "metrics-server-9975d5f86-s65lt" in "kube-system" namespace has status "Ready":"False"
	I0316 17:45:09.804172  481631 pod_ready.go:102] pod "metrics-server-9975d5f86-s65lt" in "kube-system" namespace has status "Ready":"False"
	I0316 17:45:11.808200  481631 pod_ready.go:102] pod "metrics-server-9975d5f86-s65lt" in "kube-system" namespace has status "Ready":"False"
	I0316 17:45:14.303108  481631 pod_ready.go:102] pod "metrics-server-9975d5f86-s65lt" in "kube-system" namespace has status "Ready":"False"
	I0316 17:45:16.802246  481631 pod_ready.go:102] pod "metrics-server-9975d5f86-s65lt" in "kube-system" namespace has status "Ready":"False"
	I0316 17:45:18.803476  481631 pod_ready.go:102] pod "metrics-server-9975d5f86-s65lt" in "kube-system" namespace has status "Ready":"False"
	I0316 17:45:21.302365  481631 pod_ready.go:102] pod "metrics-server-9975d5f86-s65lt" in "kube-system" namespace has status "Ready":"False"
	I0316 17:45:23.802941  481631 pod_ready.go:102] pod "metrics-server-9975d5f86-s65lt" in "kube-system" namespace has status "Ready":"False"
	I0316 17:45:26.302747  481631 pod_ready.go:102] pod "metrics-server-9975d5f86-s65lt" in "kube-system" namespace has status "Ready":"False"
	I0316 17:45:28.302779  481631 pod_ready.go:102] pod "metrics-server-9975d5f86-s65lt" in "kube-system" namespace has status "Ready":"False"
	I0316 17:45:30.304040  481631 pod_ready.go:102] pod "metrics-server-9975d5f86-s65lt" in "kube-system" namespace has status "Ready":"False"
	I0316 17:45:32.803081  481631 pod_ready.go:102] pod "metrics-server-9975d5f86-s65lt" in "kube-system" namespace has status "Ready":"False"
	I0316 17:45:35.303213  481631 pod_ready.go:102] pod "metrics-server-9975d5f86-s65lt" in "kube-system" namespace has status "Ready":"False"
	I0316 17:45:37.803002  481631 pod_ready.go:102] pod "metrics-server-9975d5f86-s65lt" in "kube-system" namespace has status "Ready":"False"
	I0316 17:45:40.302451  481631 pod_ready.go:102] pod "metrics-server-9975d5f86-s65lt" in "kube-system" namespace has status "Ready":"False"
	I0316 17:45:42.302822  481631 pod_ready.go:102] pod "metrics-server-9975d5f86-s65lt" in "kube-system" namespace has status "Ready":"False"
	I0316 17:45:44.302872  481631 pod_ready.go:102] pod "metrics-server-9975d5f86-s65lt" in "kube-system" namespace has status "Ready":"False"
	I0316 17:45:46.303660  481631 pod_ready.go:102] pod "metrics-server-9975d5f86-s65lt" in "kube-system" namespace has status "Ready":"False"
	I0316 17:45:48.332094  481631 pod_ready.go:102] pod "metrics-server-9975d5f86-s65lt" in "kube-system" namespace has status "Ready":"False"
	I0316 17:45:50.801958  481631 pod_ready.go:102] pod "metrics-server-9975d5f86-s65lt" in "kube-system" namespace has status "Ready":"False"
	I0316 17:45:53.302665  481631 pod_ready.go:102] pod "metrics-server-9975d5f86-s65lt" in "kube-system" namespace has status "Ready":"False"
	I0316 17:45:55.306059  481631 pod_ready.go:102] pod "metrics-server-9975d5f86-s65lt" in "kube-system" namespace has status "Ready":"False"
	I0316 17:45:57.802678  481631 pod_ready.go:102] pod "metrics-server-9975d5f86-s65lt" in "kube-system" namespace has status "Ready":"False"
	I0316 17:46:00.305845  481631 pod_ready.go:102] pod "metrics-server-9975d5f86-s65lt" in "kube-system" namespace has status "Ready":"False"
	I0316 17:46:02.803115  481631 pod_ready.go:102] pod "metrics-server-9975d5f86-s65lt" in "kube-system" namespace has status "Ready":"False"
	I0316 17:46:05.307191  481631 pod_ready.go:102] pod "metrics-server-9975d5f86-s65lt" in "kube-system" namespace has status "Ready":"False"
	I0316 17:46:07.803138  481631 pod_ready.go:102] pod "metrics-server-9975d5f86-s65lt" in "kube-system" namespace has status "Ready":"False"
	I0316 17:46:10.302326  481631 pod_ready.go:102] pod "metrics-server-9975d5f86-s65lt" in "kube-system" namespace has status "Ready":"False"
	I0316 17:46:12.802055  481631 pod_ready.go:102] pod "metrics-server-9975d5f86-s65lt" in "kube-system" namespace has status "Ready":"False"
	I0316 17:46:14.802305  481631 pod_ready.go:102] pod "metrics-server-9975d5f86-s65lt" in "kube-system" namespace has status "Ready":"False"
	I0316 17:46:17.302568  481631 pod_ready.go:102] pod "metrics-server-9975d5f86-s65lt" in "kube-system" namespace has status "Ready":"False"
	I0316 17:46:19.801973  481631 pod_ready.go:102] pod "metrics-server-9975d5f86-s65lt" in "kube-system" namespace has status "Ready":"False"
	I0316 17:46:21.802381  481631 pod_ready.go:102] pod "metrics-server-9975d5f86-s65lt" in "kube-system" namespace has status "Ready":"False"
	I0316 17:46:24.302135  481631 pod_ready.go:102] pod "metrics-server-9975d5f86-s65lt" in "kube-system" namespace has status "Ready":"False"
	I0316 17:46:26.303008  481631 pod_ready.go:102] pod "metrics-server-9975d5f86-s65lt" in "kube-system" namespace has status "Ready":"False"
	I0316 17:46:28.303158  481631 pod_ready.go:102] pod "metrics-server-9975d5f86-s65lt" in "kube-system" namespace has status "Ready":"False"
	I0316 17:46:30.303388  481631 pod_ready.go:102] pod "metrics-server-9975d5f86-s65lt" in "kube-system" namespace has status "Ready":"False"
	I0316 17:46:32.802327  481631 pod_ready.go:102] pod "metrics-server-9975d5f86-s65lt" in "kube-system" namespace has status "Ready":"False"
	I0316 17:46:35.307148  481631 pod_ready.go:102] pod "metrics-server-9975d5f86-s65lt" in "kube-system" namespace has status "Ready":"False"
	I0316 17:46:37.802277  481631 pod_ready.go:102] pod "metrics-server-9975d5f86-s65lt" in "kube-system" namespace has status "Ready":"False"
	I0316 17:46:39.802369  481631 pod_ready.go:102] pod "metrics-server-9975d5f86-s65lt" in "kube-system" namespace has status "Ready":"False"
	I0316 17:46:41.802623  481631 pod_ready.go:102] pod "metrics-server-9975d5f86-s65lt" in "kube-system" namespace has status "Ready":"False"
	I0316 17:46:44.302354  481631 pod_ready.go:102] pod "metrics-server-9975d5f86-s65lt" in "kube-system" namespace has status "Ready":"False"
	I0316 17:46:46.302633  481631 pod_ready.go:102] pod "metrics-server-9975d5f86-s65lt" in "kube-system" namespace has status "Ready":"False"
	I0316 17:46:48.302691  481631 pod_ready.go:102] pod "metrics-server-9975d5f86-s65lt" in "kube-system" namespace has status "Ready":"False"
	I0316 17:46:50.302964  481631 pod_ready.go:102] pod "metrics-server-9975d5f86-s65lt" in "kube-system" namespace has status "Ready":"False"
	I0316 17:46:52.802027  481631 pod_ready.go:102] pod "metrics-server-9975d5f86-s65lt" in "kube-system" namespace has status "Ready":"False"
	I0316 17:46:54.802966  481631 pod_ready.go:102] pod "metrics-server-9975d5f86-s65lt" in "kube-system" namespace has status "Ready":"False"
	I0316 17:46:57.302711  481631 pod_ready.go:102] pod "metrics-server-9975d5f86-s65lt" in "kube-system" namespace has status "Ready":"False"
	I0316 17:46:59.802974  481631 pod_ready.go:102] pod "metrics-server-9975d5f86-s65lt" in "kube-system" namespace has status "Ready":"False"
	I0316 17:47:02.302136  481631 pod_ready.go:102] pod "metrics-server-9975d5f86-s65lt" in "kube-system" namespace has status "Ready":"False"
	I0316 17:47:04.302338  481631 pod_ready.go:102] pod "metrics-server-9975d5f86-s65lt" in "kube-system" namespace has status "Ready":"False"
	I0316 17:47:06.801928  481631 pod_ready.go:102] pod "metrics-server-9975d5f86-s65lt" in "kube-system" namespace has status "Ready":"False"
	I0316 17:47:08.803138  481631 pod_ready.go:102] pod "metrics-server-9975d5f86-s65lt" in "kube-system" namespace has status "Ready":"False"
	I0316 17:47:11.303226  481631 pod_ready.go:102] pod "metrics-server-9975d5f86-s65lt" in "kube-system" namespace has status "Ready":"False"
	I0316 17:47:13.802762  481631 pod_ready.go:102] pod "metrics-server-9975d5f86-s65lt" in "kube-system" namespace has status "Ready":"False"
	I0316 17:47:15.802895  481631 pod_ready.go:102] pod "metrics-server-9975d5f86-s65lt" in "kube-system" namespace has status "Ready":"False"
	I0316 17:47:18.312261  481631 pod_ready.go:102] pod "metrics-server-9975d5f86-s65lt" in "kube-system" namespace has status "Ready":"False"
	I0316 17:47:20.802737  481631 pod_ready.go:102] pod "metrics-server-9975d5f86-s65lt" in "kube-system" namespace has status "Ready":"False"
	I0316 17:47:22.803858  481631 pod_ready.go:102] pod "metrics-server-9975d5f86-s65lt" in "kube-system" namespace has status "Ready":"False"
	I0316 17:47:25.303996  481631 pod_ready.go:102] pod "metrics-server-9975d5f86-s65lt" in "kube-system" namespace has status "Ready":"False"
	I0316 17:47:27.801968  481631 pod_ready.go:102] pod "metrics-server-9975d5f86-s65lt" in "kube-system" namespace has status "Ready":"False"
	I0316 17:47:29.802192  481631 pod_ready.go:102] pod "metrics-server-9975d5f86-s65lt" in "kube-system" namespace has status "Ready":"False"
	I0316 17:47:32.301977  481631 pod_ready.go:102] pod "metrics-server-9975d5f86-s65lt" in "kube-system" namespace has status "Ready":"False"
	I0316 17:47:34.302883  481631 pod_ready.go:102] pod "metrics-server-9975d5f86-s65lt" in "kube-system" namespace has status "Ready":"False"
	I0316 17:47:36.303193  481631 pod_ready.go:102] pod "metrics-server-9975d5f86-s65lt" in "kube-system" namespace has status "Ready":"False"
	I0316 17:47:38.802675  481631 pod_ready.go:102] pod "metrics-server-9975d5f86-s65lt" in "kube-system" namespace has status "Ready":"False"
	I0316 17:47:40.802912  481631 pod_ready.go:102] pod "metrics-server-9975d5f86-s65lt" in "kube-system" namespace has status "Ready":"False"
	I0316 17:47:42.805838  481631 pod_ready.go:102] pod "metrics-server-9975d5f86-s65lt" in "kube-system" namespace has status "Ready":"False"
	I0316 17:47:45.313942  481631 pod_ready.go:102] pod "metrics-server-9975d5f86-s65lt" in "kube-system" namespace has status "Ready":"False"
	I0316 17:47:47.804246  481631 pod_ready.go:102] pod "metrics-server-9975d5f86-s65lt" in "kube-system" namespace has status "Ready":"False"
	I0316 17:47:50.303876  481631 pod_ready.go:102] pod "metrics-server-9975d5f86-s65lt" in "kube-system" namespace has status "Ready":"False"
	I0316 17:47:52.814958  481631 pod_ready.go:102] pod "metrics-server-9975d5f86-s65lt" in "kube-system" namespace has status "Ready":"False"
	I0316 17:47:55.305590  481631 pod_ready.go:102] pod "metrics-server-9975d5f86-s65lt" in "kube-system" namespace has status "Ready":"False"
	I0316 17:47:56.801819  481631 pod_ready.go:81] duration metric: took 4m0.005459581s for pod "metrics-server-9975d5f86-s65lt" in "kube-system" namespace to be "Ready" ...
	E0316 17:47:56.801842  481631 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0316 17:47:56.801850  481631 pod_ready.go:38] duration metric: took 5m29.760095608s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
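The 4m budget above is enforced with a context deadline; when it expires the wait surfaces "context deadline exceeded", which is what the WaitExtra error reports. A compact stdlib-only sketch of that pattern, with illustrative interval and check, is shown here; it is an assumption about the general pattern, not minikube's actual code.

	// Deadline-bounded wait sketch: poll until the check passes or the 4-minute
	// context deadline expires, in which case ctx.Err() ("context deadline
	// exceeded") is returned. Values are illustrative; running main blocks for
	// the full deadline because the placeholder check never succeeds.
	package main

	import (
		"context"
		"errors"
		"fmt"
		"time"
	)

	func waitReady(ctx context.Context, interval time.Duration, check func() bool) error {
		ticker := time.NewTicker(interval)
		defer ticker.Stop()
		for {
			if check() {
				return nil
			}
			select {
			case <-ctx.Done():
				return ctx.Err()
			case <-ticker.C:
			}
		}
	}

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
		defer cancel()

		// Placeholder check; a real caller would inspect the pod's Ready condition here.
		err := waitReady(ctx, 2*time.Second, func() bool { return false })
		fmt.Println(errors.Is(err, context.DeadlineExceeded)) // true once the deadline passes
	}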
	I0316 17:47:56.801864  481631 api_server.go:52] waiting for apiserver process to appear ...
	I0316 17:47:56.801890  481631 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0316 17:47:56.801947  481631 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 17:47:56.846384  481631 cri.go:89] found id: "f6a137e8a3b1485dd10f52919de9c0fef41fc33d23e13d15ecd70b4ee918c6d5"
	I0316 17:47:56.846404  481631 cri.go:89] found id: "0340e5ca0be60b47abce880f66d4c4e5fc876c20b19e0b5c769ec2a4f1b8547b"
	I0316 17:47:56.846409  481631 cri.go:89] found id: ""
	I0316 17:47:56.846417  481631 logs.go:276] 2 containers: [f6a137e8a3b1485dd10f52919de9c0fef41fc33d23e13d15ecd70b4ee918c6d5 0340e5ca0be60b47abce880f66d4c4e5fc876c20b19e0b5c769ec2a4f1b8547b]
	I0316 17:47:56.846473  481631 ssh_runner.go:195] Run: which crictl
	I0316 17:47:56.851464  481631 ssh_runner.go:195] Run: which crictl
	I0316 17:47:56.855231  481631 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0316 17:47:56.855291  481631 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 17:47:56.900649  481631 cri.go:89] found id: "16d138fc440dd55f8a882b1a470bd88b116d89b1276ca648e106057f46db7677"
	I0316 17:47:56.900668  481631 cri.go:89] found id: "53e126d87d370fc7c40afb41dc1a7f49e87707a49e2d1486adf3a6445555d955"
	I0316 17:47:56.900673  481631 cri.go:89] found id: ""
	I0316 17:47:56.900680  481631 logs.go:276] 2 containers: [16d138fc440dd55f8a882b1a470bd88b116d89b1276ca648e106057f46db7677 53e126d87d370fc7c40afb41dc1a7f49e87707a49e2d1486adf3a6445555d955]
	I0316 17:47:56.900734  481631 ssh_runner.go:195] Run: which crictl
	I0316 17:47:56.904641  481631 ssh_runner.go:195] Run: which crictl
	I0316 17:47:56.908512  481631 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0316 17:47:56.908579  481631 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 17:47:56.958196  481631 cri.go:89] found id: "8dd0ee223c90f99d346db9114977e56c2bbfecb904aa1223b3e8e1109264981d"
	I0316 17:47:56.958221  481631 cri.go:89] found id: "a7700f61f9427c51311df28b60dc3da67a68a1be40d1f17810185e95a656508c"
	I0316 17:47:56.958227  481631 cri.go:89] found id: ""
	I0316 17:47:56.958235  481631 logs.go:276] 2 containers: [8dd0ee223c90f99d346db9114977e56c2bbfecb904aa1223b3e8e1109264981d a7700f61f9427c51311df28b60dc3da67a68a1be40d1f17810185e95a656508c]
	I0316 17:47:56.958356  481631 ssh_runner.go:195] Run: which crictl
	I0316 17:47:56.962503  481631 ssh_runner.go:195] Run: which crictl
	I0316 17:47:56.966137  481631 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0316 17:47:56.966233  481631 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 17:47:57.030461  481631 cri.go:89] found id: "bf0d1869cc0d68bb43a663a92a7a2eb950593536676cefca598146c6f602803e"
	I0316 17:47:57.030487  481631 cri.go:89] found id: "bb9fc8b360819c7a19f5e182ffa90ecf3dc71344631dac019d43ec3d489bbb79"
	I0316 17:47:57.030492  481631 cri.go:89] found id: ""
	I0316 17:47:57.030499  481631 logs.go:276] 2 containers: [bf0d1869cc0d68bb43a663a92a7a2eb950593536676cefca598146c6f602803e bb9fc8b360819c7a19f5e182ffa90ecf3dc71344631dac019d43ec3d489bbb79]
	I0316 17:47:57.030555  481631 ssh_runner.go:195] Run: which crictl
	I0316 17:47:57.035500  481631 ssh_runner.go:195] Run: which crictl
	I0316 17:47:57.040606  481631 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0316 17:47:57.040685  481631 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 17:47:57.088626  481631 cri.go:89] found id: "53266df997beffb4f7bfa6609d282d4f498bcdb315a85073da81dd740c85139f"
	I0316 17:47:57.088650  481631 cri.go:89] found id: "ccbf82b14ebc82618a0db0f8cce371995c37bb4d2cd2b873a46ac53578fbec9b"
	I0316 17:47:57.088655  481631 cri.go:89] found id: ""
	I0316 17:47:57.088663  481631 logs.go:276] 2 containers: [53266df997beffb4f7bfa6609d282d4f498bcdb315a85073da81dd740c85139f ccbf82b14ebc82618a0db0f8cce371995c37bb4d2cd2b873a46ac53578fbec9b]
	I0316 17:47:57.088727  481631 ssh_runner.go:195] Run: which crictl
	I0316 17:47:57.093520  481631 ssh_runner.go:195] Run: which crictl
	I0316 17:47:57.097632  481631 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 17:47:57.097706  481631 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 17:47:57.145155  481631 cri.go:89] found id: "c5661cb115eddb01bce4d502126d119b47c1c22da24660a5c5d57202fad6e10e"
	I0316 17:47:57.145180  481631 cri.go:89] found id: "3ffcf3139cf08c2e735e53f3bed4469b3466bcbedc7c3cb0bba55d896472640b"
	I0316 17:47:57.145184  481631 cri.go:89] found id: ""
	I0316 17:47:57.145191  481631 logs.go:276] 2 containers: [c5661cb115eddb01bce4d502126d119b47c1c22da24660a5c5d57202fad6e10e 3ffcf3139cf08c2e735e53f3bed4469b3466bcbedc7c3cb0bba55d896472640b]
	I0316 17:47:57.145246  481631 ssh_runner.go:195] Run: which crictl
	I0316 17:47:57.149767  481631 ssh_runner.go:195] Run: which crictl
	I0316 17:47:57.154500  481631 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0316 17:47:57.154575  481631 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 17:47:57.209384  481631 cri.go:89] found id: "137f2de59c6dc6edd43d77e791ff547f8b6673cd98d12a1046d38b593804d914"
	I0316 17:47:57.209408  481631 cri.go:89] found id: "22beb4846f86e0b94f967a82643633bda14c92a967549166bf63c77fcd3a5673"
	I0316 17:47:57.209413  481631 cri.go:89] found id: ""
	I0316 17:47:57.209420  481631 logs.go:276] 2 containers: [137f2de59c6dc6edd43d77e791ff547f8b6673cd98d12a1046d38b593804d914 22beb4846f86e0b94f967a82643633bda14c92a967549166bf63c77fcd3a5673]
	I0316 17:47:57.209516  481631 ssh_runner.go:195] Run: which crictl
	I0316 17:47:57.213954  481631 ssh_runner.go:195] Run: which crictl
	I0316 17:47:57.217759  481631 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0316 17:47:57.217870  481631 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0316 17:47:57.275699  481631 cri.go:89] found id: "747498059d66bf6a35719a49e025168cdec4e997bd41ff614c40cd4518774adb"
	I0316 17:47:57.275725  481631 cri.go:89] found id: "c5196a521ea11d8df3329b51f670d2873b2e489ba1e6d7bad59e4d1a58567aaf"
	I0316 17:47:57.275731  481631 cri.go:89] found id: ""
	I0316 17:47:57.275762  481631 logs.go:276] 2 containers: [747498059d66bf6a35719a49e025168cdec4e997bd41ff614c40cd4518774adb c5196a521ea11d8df3329b51f670d2873b2e489ba1e6d7bad59e4d1a58567aaf]
	I0316 17:47:57.275818  481631 ssh_runner.go:195] Run: which crictl
	I0316 17:47:57.280001  481631 ssh_runner.go:195] Run: which crictl
	I0316 17:47:57.283560  481631 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 17:47:57.283668  481631 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 17:47:57.325685  481631 cri.go:89] found id: "a8228ba39ff72ee5a9f0f601ff331405a3653e6a672688d3942fd43ebd1f5ff0"
	I0316 17:47:57.325707  481631 cri.go:89] found id: ""
	I0316 17:47:57.325715  481631 logs.go:276] 1 containers: [a8228ba39ff72ee5a9f0f601ff331405a3653e6a672688d3942fd43ebd1f5ff0]
	I0316 17:47:57.325806  481631 ssh_runner.go:195] Run: which crictl
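The container discovery above just shells out to crictl with a name filter and collects the returned IDs, one per line. A rough Go equivalent of that step is sketched below, assuming crictl is on the PATH and sudo runs non-interactively; it is an illustration, not the test harness code.

	// Rough sketch of the discovery step: run `sudo crictl ps -a --quiet --name=<name>`
	// and split stdout into container IDs. Assumes crictl on PATH and passwordless sudo.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func listContainerIDs(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		var ids []string
		for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			if line != "" {
				ids = append(ids, line)
			}
		}
		return ids, nil
	}

	func main() {
		ids, err := listContainerIDs("kube-apiserver")
		if err != nil {
			fmt.Println("crictl failed:", err)
			return
		}
		fmt.Printf("found %d kube-apiserver container(s): %v\n", len(ids), ids)
	}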
	I0316 17:47:57.335361  481631 logs.go:123] Gathering logs for coredns [8dd0ee223c90f99d346db9114977e56c2bbfecb904aa1223b3e8e1109264981d] ...
	I0316 17:47:57.335400  481631 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8dd0ee223c90f99d346db9114977e56c2bbfecb904aa1223b3e8e1109264981d"
	I0316 17:47:57.386271  481631 logs.go:123] Gathering logs for coredns [a7700f61f9427c51311df28b60dc3da67a68a1be40d1f17810185e95a656508c] ...
	I0316 17:47:57.386305  481631 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a7700f61f9427c51311df28b60dc3da67a68a1be40d1f17810185e95a656508c"
	I0316 17:47:57.438268  481631 logs.go:123] Gathering logs for kube-scheduler [bb9fc8b360819c7a19f5e182ffa90ecf3dc71344631dac019d43ec3d489bbb79] ...
	I0316 17:47:57.438345  481631 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb9fc8b360819c7a19f5e182ffa90ecf3dc71344631dac019d43ec3d489bbb79"
	I0316 17:47:57.492639  481631 logs.go:123] Gathering logs for kube-controller-manager [3ffcf3139cf08c2e735e53f3bed4469b3466bcbedc7c3cb0bba55d896472640b] ...
	I0316 17:47:57.492672  481631 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ffcf3139cf08c2e735e53f3bed4469b3466bcbedc7c3cb0bba55d896472640b"
	I0316 17:47:57.622160  481631 logs.go:123] Gathering logs for containerd ...
	I0316 17:47:57.622192  481631 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0316 17:47:57.692049  481631 logs.go:123] Gathering logs for dmesg ...
	I0316 17:47:57.692087  481631 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 17:47:57.712218  481631 logs.go:123] Gathering logs for kube-apiserver [0340e5ca0be60b47abce880f66d4c4e5fc876c20b19e0b5c769ec2a4f1b8547b] ...
	I0316 17:47:57.712297  481631 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0340e5ca0be60b47abce880f66d4c4e5fc876c20b19e0b5c769ec2a4f1b8547b"
	I0316 17:47:57.824470  481631 logs.go:123] Gathering logs for kube-scheduler [bf0d1869cc0d68bb43a663a92a7a2eb950593536676cefca598146c6f602803e] ...
	I0316 17:47:57.824542  481631 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bf0d1869cc0d68bb43a663a92a7a2eb950593536676cefca598146c6f602803e"
	I0316 17:47:57.885521  481631 logs.go:123] Gathering logs for kubernetes-dashboard [a8228ba39ff72ee5a9f0f601ff331405a3653e6a672688d3942fd43ebd1f5ff0] ...
	I0316 17:47:57.885543  481631 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a8228ba39ff72ee5a9f0f601ff331405a3653e6a672688d3942fd43ebd1f5ff0"
	I0316 17:47:57.936692  481631 logs.go:123] Gathering logs for etcd [16d138fc440dd55f8a882b1a470bd88b116d89b1276ca648e106057f46db7677] ...
	I0316 17:47:57.936718  481631 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 16d138fc440dd55f8a882b1a470bd88b116d89b1276ca648e106057f46db7677"
	I0316 17:47:57.996243  481631 logs.go:123] Gathering logs for etcd [53e126d87d370fc7c40afb41dc1a7f49e87707a49e2d1486adf3a6445555d955] ...
	I0316 17:47:57.996310  481631 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 53e126d87d370fc7c40afb41dc1a7f49e87707a49e2d1486adf3a6445555d955"
	I0316 17:47:58.069420  481631 logs.go:123] Gathering logs for kindnet [137f2de59c6dc6edd43d77e791ff547f8b6673cd98d12a1046d38b593804d914] ...
	I0316 17:47:58.069492  481631 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 137f2de59c6dc6edd43d77e791ff547f8b6673cd98d12a1046d38b593804d914"
	I0316 17:47:58.140301  481631 logs.go:123] Gathering logs for kindnet [22beb4846f86e0b94f967a82643633bda14c92a967549166bf63c77fcd3a5673] ...
	I0316 17:47:58.140388  481631 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 22beb4846f86e0b94f967a82643633bda14c92a967549166bf63c77fcd3a5673"
	I0316 17:47:58.195782  481631 logs.go:123] Gathering logs for storage-provisioner [747498059d66bf6a35719a49e025168cdec4e997bd41ff614c40cd4518774adb] ...
	I0316 17:47:58.195811  481631 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 747498059d66bf6a35719a49e025168cdec4e997bd41ff614c40cd4518774adb"
	I0316 17:47:58.246786  481631 logs.go:123] Gathering logs for describe nodes ...
	I0316 17:47:58.246854  481631 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0316 17:47:58.533319  481631 logs.go:123] Gathering logs for kube-proxy [53266df997beffb4f7bfa6609d282d4f498bcdb315a85073da81dd740c85139f] ...
	I0316 17:47:58.533350  481631 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 53266df997beffb4f7bfa6609d282d4f498bcdb315a85073da81dd740c85139f"
	I0316 17:47:58.581039  481631 logs.go:123] Gathering logs for kube-proxy [ccbf82b14ebc82618a0db0f8cce371995c37bb4d2cd2b873a46ac53578fbec9b] ...
	I0316 17:47:58.581076  481631 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ccbf82b14ebc82618a0db0f8cce371995c37bb4d2cd2b873a46ac53578fbec9b"
	I0316 17:47:58.646782  481631 logs.go:123] Gathering logs for kube-controller-manager [c5661cb115eddb01bce4d502126d119b47c1c22da24660a5c5d57202fad6e10e] ...
	I0316 17:47:58.646811  481631 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c5661cb115eddb01bce4d502126d119b47c1c22da24660a5c5d57202fad6e10e"
	I0316 17:47:58.728590  481631 logs.go:123] Gathering logs for storage-provisioner [c5196a521ea11d8df3329b51f670d2873b2e489ba1e6d7bad59e4d1a58567aaf] ...
	I0316 17:47:58.728623  481631 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c5196a521ea11d8df3329b51f670d2873b2e489ba1e6d7bad59e4d1a58567aaf"
	I0316 17:47:58.804089  481631 logs.go:123] Gathering logs for container status ...
	I0316 17:47:58.804118  481631 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 17:47:58.878284  481631 logs.go:123] Gathering logs for kubelet ...
	I0316 17:47:58.878312  481631 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0316 17:47:58.937737  481631 logs.go:138] Found kubelet problem: Mar 16 17:42:26 old-k8s-version-746380 kubelet[660]: E0316 17:42:26.973155     660 reflector.go:138] object-"kube-system"/"coredns-token-xmxt5": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-xmxt5" is forbidden: User "system:node:old-k8s-version-746380" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-746380' and this object
	W0316 17:47:58.937979  481631 logs.go:138] Found kubelet problem: Mar 16 17:42:26 old-k8s-version-746380 kubelet[660]: E0316 17:42:26.973450     660 reflector.go:138] object-"kube-system"/"storage-provisioner-token-jg8md": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-jg8md" is forbidden: User "system:node:old-k8s-version-746380" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-746380' and this object
	W0316 17:47:58.938253  481631 logs.go:138] Found kubelet problem: Mar 16 17:42:26 old-k8s-version-746380 kubelet[660]: E0316 17:42:26.973508     660 reflector.go:138] object-"default"/"default-token-v8zz6": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-v8zz6" is forbidden: User "system:node:old-k8s-version-746380" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-746380' and this object
	W0316 17:47:58.938518  481631 logs.go:138] Found kubelet problem: Mar 16 17:42:26 old-k8s-version-746380 kubelet[660]: E0316 17:42:26.973560     660 reflector.go:138] object-"kube-system"/"metrics-server-token-dlrz5": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-dlrz5" is forbidden: User "system:node:old-k8s-version-746380" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-746380' and this object
	W0316 17:47:58.938727  481631 logs.go:138] Found kubelet problem: Mar 16 17:42:26 old-k8s-version-746380 kubelet[660]: E0316 17:42:26.973614     660 reflector.go:138] object-"kube-system"/"kindnet-token-79qtr": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-79qtr" is forbidden: User "system:node:old-k8s-version-746380" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-746380' and this object
	W0316 17:47:58.938945  481631 logs.go:138] Found kubelet problem: Mar 16 17:42:26 old-k8s-version-746380 kubelet[660]: E0316 17:42:26.973660     660 reflector.go:138] object-"kube-system"/"kube-proxy-token-nlbsf": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-nlbsf" is forbidden: User "system:node:old-k8s-version-746380" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-746380' and this object
	W0316 17:47:58.939153  481631 logs.go:138] Found kubelet problem: Mar 16 17:42:26 old-k8s-version-746380 kubelet[660]: E0316 17:42:26.973717     660 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-746380" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-746380' and this object
	W0316 17:47:58.939370  481631 logs.go:138] Found kubelet problem: Mar 16 17:42:26 old-k8s-version-746380 kubelet[660]: E0316 17:42:26.973769     660 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-746380" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-746380' and this object
	W0316 17:47:58.949987  481631 logs.go:138] Found kubelet problem: Mar 16 17:42:28 old-k8s-version-746380 kubelet[660]: E0316 17:42:28.869550     660 pod_workers.go:191] Error syncing pod cd17aa57-4a12-49cf-9cb2-d519126a78d2 ("metrics-server-9975d5f86-s65lt_kube-system(cd17aa57-4a12-49cf-9cb2-d519126a78d2)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0316 17:47:58.951000  481631 logs.go:138] Found kubelet problem: Mar 16 17:42:29 old-k8s-version-746380 kubelet[660]: E0316 17:42:29.787727     660 pod_workers.go:191] Error syncing pod cd17aa57-4a12-49cf-9cb2-d519126a78d2 ("metrics-server-9975d5f86-s65lt_kube-system(cd17aa57-4a12-49cf-9cb2-d519126a78d2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0316 17:47:58.953849  481631 logs.go:138] Found kubelet problem: Mar 16 17:42:43 old-k8s-version-746380 kubelet[660]: E0316 17:42:43.570004     660 pod_workers.go:191] Error syncing pod cd17aa57-4a12-49cf-9cb2-d519126a78d2 ("metrics-server-9975d5f86-s65lt_kube-system(cd17aa57-4a12-49cf-9cb2-d519126a78d2)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0316 17:47:58.955949  481631 logs.go:138] Found kubelet problem: Mar 16 17:42:52 old-k8s-version-746380 kubelet[660]: E0316 17:42:52.887504     660 pod_workers.go:191] Error syncing pod 44cccf23-c5b3-46c9-b387-cc05351e79ed ("dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"
	W0316 17:47:58.956321  481631 logs.go:138] Found kubelet problem: Mar 16 17:42:53 old-k8s-version-746380 kubelet[660]: E0316 17:42:53.890811     660 pod_workers.go:191] Error syncing pod 44cccf23-c5b3-46c9-b387-cc05351e79ed ("dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"
	W0316 17:47:58.956858  481631 logs.go:138] Found kubelet problem: Mar 16 17:42:56 old-k8s-version-746380 kubelet[660]: E0316 17:42:56.557270     660 pod_workers.go:191] Error syncing pod cd17aa57-4a12-49cf-9cb2-d519126a78d2 ("metrics-server-9975d5f86-s65lt_kube-system(cd17aa57-4a12-49cf-9cb2-d519126a78d2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0316 17:47:58.957184  481631 logs.go:138] Found kubelet problem: Mar 16 17:42:58 old-k8s-version-746380 kubelet[660]: E0316 17:42:58.242491     660 pod_workers.go:191] Error syncing pod 44cccf23-c5b3-46c9-b387-cc05351e79ed ("dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"
	W0316 17:47:58.957618  481631 logs.go:138] Found kubelet problem: Mar 16 17:42:59 old-k8s-version-746380 kubelet[660]: E0316 17:42:59.906687     660 pod_workers.go:191] Error syncing pod 5f7e0b89-084e-48fe-9574-508fd681797d ("storage-provisioner_kube-system(5f7e0b89-084e-48fe-9574-508fd681797d)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(5f7e0b89-084e-48fe-9574-508fd681797d)"
	W0316 17:47:58.960398  481631 logs.go:138] Found kubelet problem: Mar 16 17:43:10 old-k8s-version-746380 kubelet[660]: E0316 17:43:10.576290     660 pod_workers.go:191] Error syncing pod cd17aa57-4a12-49cf-9cb2-d519126a78d2 ("metrics-server-9975d5f86-s65lt_kube-system(cd17aa57-4a12-49cf-9cb2-d519126a78d2)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0316 17:47:58.961045  481631 logs.go:138] Found kubelet problem: Mar 16 17:43:11 old-k8s-version-746380 kubelet[660]: E0316 17:43:11.935923     660 pod_workers.go:191] Error syncing pod 44cccf23-c5b3-46c9-b387-cc05351e79ed ("dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"
	W0316 17:47:58.961498  481631 logs.go:138] Found kubelet problem: Mar 16 17:43:18 old-k8s-version-746380 kubelet[660]: E0316 17:43:18.243039     660 pod_workers.go:191] Error syncing pod 44cccf23-c5b3-46c9-b387-cc05351e79ed ("dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"
	W0316 17:47:58.961677  481631 logs.go:138] Found kubelet problem: Mar 16 17:43:24 old-k8s-version-746380 kubelet[660]: E0316 17:43:24.557141     660 pod_workers.go:191] Error syncing pod cd17aa57-4a12-49cf-9cb2-d519126a78d2 ("metrics-server-9975d5f86-s65lt_kube-system(cd17aa57-4a12-49cf-9cb2-d519126a78d2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0316 17:47:58.961999  481631 logs.go:138] Found kubelet problem: Mar 16 17:43:30 old-k8s-version-746380 kubelet[660]: E0316 17:43:30.556868     660 pod_workers.go:191] Error syncing pod 44cccf23-c5b3-46c9-b387-cc05351e79ed ("dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"
	W0316 17:47:58.962183  481631 logs.go:138] Found kubelet problem: Mar 16 17:43:36 old-k8s-version-746380 kubelet[660]: E0316 17:43:36.556932     660 pod_workers.go:191] Error syncing pod cd17aa57-4a12-49cf-9cb2-d519126a78d2 ("metrics-server-9975d5f86-s65lt_kube-system(cd17aa57-4a12-49cf-9cb2-d519126a78d2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0316 17:47:58.962768  481631 logs.go:138] Found kubelet problem: Mar 16 17:43:42 old-k8s-version-746380 kubelet[660]: E0316 17:43:42.043309     660 pod_workers.go:191] Error syncing pod 44cccf23-c5b3-46c9-b387-cc05351e79ed ("dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"
	W0316 17:47:58.963089  481631 logs.go:138] Found kubelet problem: Mar 16 17:43:48 old-k8s-version-746380 kubelet[660]: E0316 17:43:48.242956     660 pod_workers.go:191] Error syncing pod 44cccf23-c5b3-46c9-b387-cc05351e79ed ("dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"
	W0316 17:47:58.965567  481631 logs.go:138] Found kubelet problem: Mar 16 17:43:51 old-k8s-version-746380 kubelet[660]: E0316 17:43:51.580010     660 pod_workers.go:191] Error syncing pod cd17aa57-4a12-49cf-9cb2-d519126a78d2 ("metrics-server-9975d5f86-s65lt_kube-system(cd17aa57-4a12-49cf-9cb2-d519126a78d2)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0316 17:47:58.965917  481631 logs.go:138] Found kubelet problem: Mar 16 17:43:59 old-k8s-version-746380 kubelet[660]: E0316 17:43:59.556848     660 pod_workers.go:191] Error syncing pod 44cccf23-c5b3-46c9-b387-cc05351e79ed ("dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"
	W0316 17:47:58.966130  481631 logs.go:138] Found kubelet problem: Mar 16 17:44:06 old-k8s-version-746380 kubelet[660]: E0316 17:44:06.561902     660 pod_workers.go:191] Error syncing pod cd17aa57-4a12-49cf-9cb2-d519126a78d2 ("metrics-server-9975d5f86-s65lt_kube-system(cd17aa57-4a12-49cf-9cb2-d519126a78d2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0316 17:47:58.966483  481631 logs.go:138] Found kubelet problem: Mar 16 17:44:11 old-k8s-version-746380 kubelet[660]: E0316 17:44:11.559395     660 pod_workers.go:191] Error syncing pod 44cccf23-c5b3-46c9-b387-cc05351e79ed ("dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"
	W0316 17:47:58.966688  481631 logs.go:138] Found kubelet problem: Mar 16 17:44:18 old-k8s-version-746380 kubelet[660]: E0316 17:44:18.557203     660 pod_workers.go:191] Error syncing pod cd17aa57-4a12-49cf-9cb2-d519126a78d2 ("metrics-server-9975d5f86-s65lt_kube-system(cd17aa57-4a12-49cf-9cb2-d519126a78d2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0316 17:47:58.967294  481631 logs.go:138] Found kubelet problem: Mar 16 17:44:27 old-k8s-version-746380 kubelet[660]: E0316 17:44:27.155323     660 pod_workers.go:191] Error syncing pod 44cccf23-c5b3-46c9-b387-cc05351e79ed ("dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"
	W0316 17:47:58.967653  481631 logs.go:138] Found kubelet problem: Mar 16 17:44:28 old-k8s-version-746380 kubelet[660]: E0316 17:44:28.242526     660 pod_workers.go:191] Error syncing pod 44cccf23-c5b3-46c9-b387-cc05351e79ed ("dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"
	W0316 17:47:58.967940  481631 logs.go:138] Found kubelet problem: Mar 16 17:44:29 old-k8s-version-746380 kubelet[660]: E0316 17:44:29.560438     660 pod_workers.go:191] Error syncing pod cd17aa57-4a12-49cf-9cb2-d519126a78d2 ("metrics-server-9975d5f86-s65lt_kube-system(cd17aa57-4a12-49cf-9cb2-d519126a78d2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0316 17:47:58.968151  481631 logs.go:138] Found kubelet problem: Mar 16 17:44:42 old-k8s-version-746380 kubelet[660]: E0316 17:44:42.557018     660 pod_workers.go:191] Error syncing pod cd17aa57-4a12-49cf-9cb2-d519126a78d2 ("metrics-server-9975d5f86-s65lt_kube-system(cd17aa57-4a12-49cf-9cb2-d519126a78d2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0316 17:47:58.968507  481631 logs.go:138] Found kubelet problem: Mar 16 17:44:43 old-k8s-version-746380 kubelet[660]: E0316 17:44:43.556890     660 pod_workers.go:191] Error syncing pod 44cccf23-c5b3-46c9-b387-cc05351e79ed ("dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"
	W0316 17:47:58.968713  481631 logs.go:138] Found kubelet problem: Mar 16 17:44:53 old-k8s-version-746380 kubelet[660]: E0316 17:44:53.559894     660 pod_workers.go:191] Error syncing pod cd17aa57-4a12-49cf-9cb2-d519126a78d2 ("metrics-server-9975d5f86-s65lt_kube-system(cd17aa57-4a12-49cf-9cb2-d519126a78d2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0316 17:47:58.969060  481631 logs.go:138] Found kubelet problem: Mar 16 17:44:54 old-k8s-version-746380 kubelet[660]: E0316 17:44:54.556687     660 pod_workers.go:191] Error syncing pod 44cccf23-c5b3-46c9-b387-cc05351e79ed ("dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"
	W0316 17:47:58.969408  481631 logs.go:138] Found kubelet problem: Mar 16 17:45:06 old-k8s-version-746380 kubelet[660]: E0316 17:45:06.556647     660 pod_workers.go:191] Error syncing pod 44cccf23-c5b3-46c9-b387-cc05351e79ed ("dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"
	W0316 17:47:58.969612  481631 logs.go:138] Found kubelet problem: Mar 16 17:45:07 old-k8s-version-746380 kubelet[660]: E0316 17:45:07.558766     660 pod_workers.go:191] Error syncing pod cd17aa57-4a12-49cf-9cb2-d519126a78d2 ("metrics-server-9975d5f86-s65lt_kube-system(cd17aa57-4a12-49cf-9cb2-d519126a78d2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0316 17:47:58.969957  481631 logs.go:138] Found kubelet problem: Mar 16 17:45:21 old-k8s-version-746380 kubelet[660]: E0316 17:45:21.557302     660 pod_workers.go:191] Error syncing pod 44cccf23-c5b3-46c9-b387-cc05351e79ed ("dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"
	W0316 17:47:58.972422  481631 logs.go:138] Found kubelet problem: Mar 16 17:45:22 old-k8s-version-746380 kubelet[660]: E0316 17:45:22.564132     660 pod_workers.go:191] Error syncing pod cd17aa57-4a12-49cf-9cb2-d519126a78d2 ("metrics-server-9975d5f86-s65lt_kube-system(cd17aa57-4a12-49cf-9cb2-d519126a78d2)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0316 17:47:58.972631  481631 logs.go:138] Found kubelet problem: Mar 16 17:45:33 old-k8s-version-746380 kubelet[660]: E0316 17:45:33.558085     660 pod_workers.go:191] Error syncing pod cd17aa57-4a12-49cf-9cb2-d519126a78d2 ("metrics-server-9975d5f86-s65lt_kube-system(cd17aa57-4a12-49cf-9cb2-d519126a78d2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0316 17:47:58.973000  481631 logs.go:138] Found kubelet problem: Mar 16 17:45:34 old-k8s-version-746380 kubelet[660]: E0316 17:45:34.556860     660 pod_workers.go:191] Error syncing pod 44cccf23-c5b3-46c9-b387-cc05351e79ed ("dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"
	W0316 17:47:58.973352  481631 logs.go:138] Found kubelet problem: Mar 16 17:45:45 old-k8s-version-746380 kubelet[660]: E0316 17:45:45.557389     660 pod_workers.go:191] Error syncing pod 44cccf23-c5b3-46c9-b387-cc05351e79ed ("dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"
	W0316 17:47:58.973640  481631 logs.go:138] Found kubelet problem: Mar 16 17:45:46 old-k8s-version-746380 kubelet[660]: E0316 17:45:46.567889     660 pod_workers.go:191] Error syncing pod cd17aa57-4a12-49cf-9cb2-d519126a78d2 ("metrics-server-9975d5f86-s65lt_kube-system(cd17aa57-4a12-49cf-9cb2-d519126a78d2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0316 17:47:58.974263  481631 logs.go:138] Found kubelet problem: Mar 16 17:45:57 old-k8s-version-746380 kubelet[660]: E0316 17:45:57.379216     660 pod_workers.go:191] Error syncing pod 44cccf23-c5b3-46c9-b387-cc05351e79ed ("dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"
	W0316 17:47:58.974623  481631 logs.go:138] Found kubelet problem: Mar 16 17:45:58 old-k8s-version-746380 kubelet[660]: E0316 17:45:58.382607     660 pod_workers.go:191] Error syncing pod 44cccf23-c5b3-46c9-b387-cc05351e79ed ("dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"
	W0316 17:47:58.974831  481631 logs.go:138] Found kubelet problem: Mar 16 17:45:58 old-k8s-version-746380 kubelet[660]: E0316 17:45:58.557215     660 pod_workers.go:191] Error syncing pod cd17aa57-4a12-49cf-9cb2-d519126a78d2 ("metrics-server-9975d5f86-s65lt_kube-system(cd17aa57-4a12-49cf-9cb2-d519126a78d2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0316 17:47:58.975036  481631 logs.go:138] Found kubelet problem: Mar 16 17:46:10 old-k8s-version-746380 kubelet[660]: E0316 17:46:10.556961     660 pod_workers.go:191] Error syncing pod cd17aa57-4a12-49cf-9cb2-d519126a78d2 ("metrics-server-9975d5f86-s65lt_kube-system(cd17aa57-4a12-49cf-9cb2-d519126a78d2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0316 17:47:58.975387  481631 logs.go:138] Found kubelet problem: Mar 16 17:46:13 old-k8s-version-746380 kubelet[660]: E0316 17:46:13.557158     660 pod_workers.go:191] Error syncing pod 44cccf23-c5b3-46c9-b387-cc05351e79ed ("dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"
	W0316 17:47:58.975590  481631 logs.go:138] Found kubelet problem: Mar 16 17:46:21 old-k8s-version-746380 kubelet[660]: E0316 17:46:21.557267     660 pod_workers.go:191] Error syncing pod cd17aa57-4a12-49cf-9cb2-d519126a78d2 ("metrics-server-9975d5f86-s65lt_kube-system(cd17aa57-4a12-49cf-9cb2-d519126a78d2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0316 17:47:58.975952  481631 logs.go:138] Found kubelet problem: Mar 16 17:46:25 old-k8s-version-746380 kubelet[660]: E0316 17:46:25.557644     660 pod_workers.go:191] Error syncing pod 44cccf23-c5b3-46c9-b387-cc05351e79ed ("dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"
	W0316 17:47:58.976163  481631 logs.go:138] Found kubelet problem: Mar 16 17:46:34 old-k8s-version-746380 kubelet[660]: E0316 17:46:34.556981     660 pod_workers.go:191] Error syncing pod cd17aa57-4a12-49cf-9cb2-d519126a78d2 ("metrics-server-9975d5f86-s65lt_kube-system(cd17aa57-4a12-49cf-9cb2-d519126a78d2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0316 17:47:58.976517  481631 logs.go:138] Found kubelet problem: Mar 16 17:46:39 old-k8s-version-746380 kubelet[660]: E0316 17:46:39.559649     660 pod_workers.go:191] Error syncing pod 44cccf23-c5b3-46c9-b387-cc05351e79ed ("dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"
	W0316 17:47:58.976723  481631 logs.go:138] Found kubelet problem: Mar 16 17:46:49 old-k8s-version-746380 kubelet[660]: E0316 17:46:49.557008     660 pod_workers.go:191] Error syncing pod cd17aa57-4a12-49cf-9cb2-d519126a78d2 ("metrics-server-9975d5f86-s65lt_kube-system(cd17aa57-4a12-49cf-9cb2-d519126a78d2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0316 17:47:58.977046  481631 logs.go:138] Found kubelet problem: Mar 16 17:46:53 old-k8s-version-746380 kubelet[660]: E0316 17:46:53.557338     660 pod_workers.go:191] Error syncing pod 44cccf23-c5b3-46c9-b387-cc05351e79ed ("dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"
	W0316 17:47:58.977225  481631 logs.go:138] Found kubelet problem: Mar 16 17:47:04 old-k8s-version-746380 kubelet[660]: E0316 17:47:04.557220     660 pod_workers.go:191] Error syncing pod cd17aa57-4a12-49cf-9cb2-d519126a78d2 ("metrics-server-9975d5f86-s65lt_kube-system(cd17aa57-4a12-49cf-9cb2-d519126a78d2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0316 17:47:58.977547  481631 logs.go:138] Found kubelet problem: Mar 16 17:47:05 old-k8s-version-746380 kubelet[660]: E0316 17:47:05.557240     660 pod_workers.go:191] Error syncing pod 44cccf23-c5b3-46c9-b387-cc05351e79ed ("dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"
	W0316 17:47:58.977868  481631 logs.go:138] Found kubelet problem: Mar 16 17:47:16 old-k8s-version-746380 kubelet[660]: E0316 17:47:16.557143     660 pod_workers.go:191] Error syncing pod 44cccf23-c5b3-46c9-b387-cc05351e79ed ("dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"
	W0316 17:47:58.978047  481631 logs.go:138] Found kubelet problem: Mar 16 17:47:16 old-k8s-version-746380 kubelet[660]: E0316 17:47:16.557232     660 pod_workers.go:191] Error syncing pod cd17aa57-4a12-49cf-9cb2-d519126a78d2 ("metrics-server-9975d5f86-s65lt_kube-system(cd17aa57-4a12-49cf-9cb2-d519126a78d2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0316 17:47:58.978383  481631 logs.go:138] Found kubelet problem: Mar 16 17:47:27 old-k8s-version-746380 kubelet[660]: E0316 17:47:27.560424     660 pod_workers.go:191] Error syncing pod 44cccf23-c5b3-46c9-b387-cc05351e79ed ("dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"
	W0316 17:47:58.978563  481631 logs.go:138] Found kubelet problem: Mar 16 17:47:27 old-k8s-version-746380 kubelet[660]: E0316 17:47:27.561727     660 pod_workers.go:191] Error syncing pod cd17aa57-4a12-49cf-9cb2-d519126a78d2 ("metrics-server-9975d5f86-s65lt_kube-system(cd17aa57-4a12-49cf-9cb2-d519126a78d2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0316 17:47:58.978884  481631 logs.go:138] Found kubelet problem: Mar 16 17:47:39 old-k8s-version-746380 kubelet[660]: E0316 17:47:39.565385     660 pod_workers.go:191] Error syncing pod 44cccf23-c5b3-46c9-b387-cc05351e79ed ("dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"
	W0316 17:47:58.979063  481631 logs.go:138] Found kubelet problem: Mar 16 17:47:42 old-k8s-version-746380 kubelet[660]: E0316 17:47:42.561108     660 pod_workers.go:191] Error syncing pod cd17aa57-4a12-49cf-9cb2-d519126a78d2 ("metrics-server-9975d5f86-s65lt_kube-system(cd17aa57-4a12-49cf-9cb2-d519126a78d2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0316 17:47:58.979382  481631 logs.go:138] Found kubelet problem: Mar 16 17:47:50 old-k8s-version-746380 kubelet[660]: E0316 17:47:50.556652     660 pod_workers.go:191] Error syncing pod 44cccf23-c5b3-46c9-b387-cc05351e79ed ("dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"
	W0316 17:47:58.979563  481631 logs.go:138] Found kubelet problem: Mar 16 17:47:56 old-k8s-version-746380 kubelet[660]: E0316 17:47:56.557033     660 pod_workers.go:191] Error syncing pod cd17aa57-4a12-49cf-9cb2-d519126a78d2 ("metrics-server-9975d5f86-s65lt_kube-system(cd17aa57-4a12-49cf-9cb2-d519126a78d2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0316 17:47:58.979570  481631 logs.go:123] Gathering logs for kube-apiserver [f6a137e8a3b1485dd10f52919de9c0fef41fc33d23e13d15ecd70b4ee918c6d5] ...
	I0316 17:47:58.979583  481631 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6a137e8a3b1485dd10f52919de9c0fef41fc33d23e13d15ecd70b4ee918c6d5"
	I0316 17:47:59.125053  481631 out.go:304] Setting ErrFile to fd 2...
	I0316 17:47:59.125091  481631 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0316 17:47:59.125167  481631 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0316 17:47:59.125176  481631 out.go:239]   Mar 16 17:47:27 old-k8s-version-746380 kubelet[660]: E0316 17:47:27.561727     660 pod_workers.go:191] Error syncing pod cd17aa57-4a12-49cf-9cb2-d519126a78d2 ("metrics-server-9975d5f86-s65lt_kube-system(cd17aa57-4a12-49cf-9cb2-d519126a78d2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Mar 16 17:47:27 old-k8s-version-746380 kubelet[660]: E0316 17:47:27.561727     660 pod_workers.go:191] Error syncing pod cd17aa57-4a12-49cf-9cb2-d519126a78d2 ("metrics-server-9975d5f86-s65lt_kube-system(cd17aa57-4a12-49cf-9cb2-d519126a78d2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0316 17:47:59.125185  481631 out.go:239]   Mar 16 17:47:39 old-k8s-version-746380 kubelet[660]: E0316 17:47:39.565385     660 pod_workers.go:191] Error syncing pod 44cccf23-c5b3-46c9-b387-cc05351e79ed ("dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"
	  Mar 16 17:47:39 old-k8s-version-746380 kubelet[660]: E0316 17:47:39.565385     660 pod_workers.go:191] Error syncing pod 44cccf23-c5b3-46c9-b387-cc05351e79ed ("dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"
	W0316 17:47:59.125193  481631 out.go:239]   Mar 16 17:47:42 old-k8s-version-746380 kubelet[660]: E0316 17:47:42.561108     660 pod_workers.go:191] Error syncing pod cd17aa57-4a12-49cf-9cb2-d519126a78d2 ("metrics-server-9975d5f86-s65lt_kube-system(cd17aa57-4a12-49cf-9cb2-d519126a78d2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Mar 16 17:47:42 old-k8s-version-746380 kubelet[660]: E0316 17:47:42.561108     660 pod_workers.go:191] Error syncing pod cd17aa57-4a12-49cf-9cb2-d519126a78d2 ("metrics-server-9975d5f86-s65lt_kube-system(cd17aa57-4a12-49cf-9cb2-d519126a78d2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0316 17:47:59.125239  481631 out.go:239]   Mar 16 17:47:50 old-k8s-version-746380 kubelet[660]: E0316 17:47:50.556652     660 pod_workers.go:191] Error syncing pod 44cccf23-c5b3-46c9-b387-cc05351e79ed ("dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"
	  Mar 16 17:47:50 old-k8s-version-746380 kubelet[660]: E0316 17:47:50.556652     660 pod_workers.go:191] Error syncing pod 44cccf23-c5b3-46c9-b387-cc05351e79ed ("dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"
	W0316 17:47:59.125249  481631 out.go:239]   Mar 16 17:47:56 old-k8s-version-746380 kubelet[660]: E0316 17:47:56.557033     660 pod_workers.go:191] Error syncing pod cd17aa57-4a12-49cf-9cb2-d519126a78d2 ("metrics-server-9975d5f86-s65lt_kube-system(cd17aa57-4a12-49cf-9cb2-d519126a78d2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Mar 16 17:47:56 old-k8s-version-746380 kubelet[660]: E0316 17:47:56.557033     660 pod_workers.go:191] Error syncing pod cd17aa57-4a12-49cf-9cb2-d519126a78d2 ("metrics-server-9975d5f86-s65lt_kube-system(cd17aa57-4a12-49cf-9cb2-d519126a78d2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0316 17:47:59.125261  481631 out.go:304] Setting ErrFile to fd 2...
	I0316 17:47:59.125272  481631 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0316 17:48:09.125999  481631 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 17:48:09.141689  481631 api_server.go:72] duration metric: took 6m1.572659285s to wait for apiserver process to appear ...
	I0316 17:48:09.141715  481631 api_server.go:88] waiting for apiserver healthz status ...
	I0316 17:48:09.144420  481631 out.go:177] 
	W0316 17:48:09.146305  481631 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: cluster wait timed out during healthz check
	X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: cluster wait timed out during healthz check
	W0316 17:48:09.146326  481631 out.go:239] * 
	* 
	W0316 17:48:09.147294  481631 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0316 17:48:09.148762  481631 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p old-k8s-version-746380 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0": exit status 80
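Note: exit status 80 here is minikube giving up after the 6m0s node wait because the apiserver /healthz check never succeeded. As a rough illustration of the kind of readiness probe the GUEST_START message is describing, the hedged Go sketch below polls a /healthz URL on the port the kic container publishes on localhost until it answers "ok" or a deadline passes. The URL, the 33437 port (taken from the docker inspect output further down), the skipped TLS verification, and the timeout values are all assumptions for this sketch, not minikube's actual implementation.

// Minimal sketch (not minikube's code): poll an apiserver /healthz endpoint
// published on localhost until it reports "ok" or the deadline expires.
// Port 33437 and the 6-minute budget are assumptions taken from this report;
// TLS verification is skipped because the apiserver cert is not for 127.0.0.1.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("apiserver healthz never reported healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://127.0.0.1:33437/healthz", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}

Against a cluster in the state captured here, such a probe would time out in the same way the "cluster wait timed out during healthz check" error reports.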
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-746380
helpers_test.go:235: (dbg) docker inspect old-k8s-version-746380:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "3df96bd4b9a513ed385047c4bc628f54de78a740c0b01e71bbf8e603d0db7657",
	        "Created": "2024-03-16T17:38:54.559814149Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 481810,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-03-16T17:42:00.858127848Z",
	            "FinishedAt": "2024-03-16T17:41:59.099110284Z"
	        },
	        "Image": "sha256:db62270b4bb0cfcde696782f7a6322baca275275e31814ce9fd8998407bf461e",
	        "ResolvConfPath": "/var/lib/docker/containers/3df96bd4b9a513ed385047c4bc628f54de78a740c0b01e71bbf8e603d0db7657/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3df96bd4b9a513ed385047c4bc628f54de78a740c0b01e71bbf8e603d0db7657/hostname",
	        "HostsPath": "/var/lib/docker/containers/3df96bd4b9a513ed385047c4bc628f54de78a740c0b01e71bbf8e603d0db7657/hosts",
	        "LogPath": "/var/lib/docker/containers/3df96bd4b9a513ed385047c4bc628f54de78a740c0b01e71bbf8e603d0db7657/3df96bd4b9a513ed385047c4bc628f54de78a740c0b01e71bbf8e603d0db7657-json.log",
	        "Name": "/old-k8s-version-746380",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-746380:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-746380",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/593dc9613e021fdc448be608010898ac76ea854f055e8cba3c931a43285e6e11-init/diff:/var/lib/docker/overlay2/8d60f86c085005efdbad22ffe73f1ce0b89f9b32800c71896e407b2a86b69166/diff",
	                "MergedDir": "/var/lib/docker/overlay2/593dc9613e021fdc448be608010898ac76ea854f055e8cba3c931a43285e6e11/merged",
	                "UpperDir": "/var/lib/docker/overlay2/593dc9613e021fdc448be608010898ac76ea854f055e8cba3c931a43285e6e11/diff",
	                "WorkDir": "/var/lib/docker/overlay2/593dc9613e021fdc448be608010898ac76ea854f055e8cba3c931a43285e6e11/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-746380",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-746380/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-746380",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-746380",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-746380",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "eb8bb4b963a5334e1b7db5e50823c7afc7bbefb75133e1ab661206fd721fa505",
	            "SandboxKey": "/var/run/docker/netns/eb8bb4b963a5",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33440"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33439"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33436"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33438"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33437"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-746380": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "3df96bd4b9a5",
	                        "old-k8s-version-746380"
	                    ],
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "NetworkID": "b4e11c559a07d47b4de668fba703728b5545d960541591975a5d5d658b179fcb",
	                    "EndpointID": "7b541ff716b68ecf3626e9a9a903c5d9f80c7f1454e6fcdb1a51281e173aa16e",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "old-k8s-version-746380",
	                        "3df96bd4b9a5"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
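The inspect output above is the quickest place to recover where the apiserver is reachable from the host: NetworkSettings.Ports for "8443/tcp" maps to 127.0.0.1:33437 in this run. A minimal sketch, assuming the docker CLI is on PATH and the old-k8s-version-746380 container still exists, that shells out to docker inspect and decodes only that field (struct field names mirror the JSON shown above):

// Minimal sketch: read the host port docker published for the container's
// 8443/tcp (the apiserver port) from `docker inspect` output.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type inspect struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

func apiserverHostPort(container string) (string, error) {
	out, err := exec.Command("docker", "inspect", container).Output()
	if err != nil {
		return "", err
	}
	var containers []inspect
	if err := json.Unmarshal(out, &containers); err != nil {
		return "", err
	}
	if len(containers) == 0 {
		return "", fmt.Errorf("no container named %q", container)
	}
	bindings := containers[0].NetworkSettings.Ports["8443/tcp"]
	if len(bindings) == 0 {
		return "", fmt.Errorf("8443/tcp is not published for %q", container)
	}
	return bindings[0].HostPort, nil
}

func main() {
	port, err := apiserverHostPort("old-k8s-version-746380")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("apiserver published on 127.0.0.1:" + port)
}

The same value can be read with the stock CLI (docker inspect with a --format Go template); the snippet is only meant to show which part of the inspect document the post-mortem relies on.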
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-746380 -n old-k8s-version-746380
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-746380 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-746380 logs -n 25: (2.531436908s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |         Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	| start   | -p cert-expiration-906495                              | cert-expiration-906495   | jenkins | v1.32.0 | 16 Mar 24 17:37 UTC | 16 Mar 24 17:38 UTC |
	|         | --memory=2048                                          |                          |         |         |                     |                     |
	|         | --cert-expiration=3m                                   |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	| ssh     | force-systemd-env-682017                               | force-systemd-env-682017 | jenkins | v1.32.0 | 16 Mar 24 17:38 UTC | 16 Mar 24 17:38 UTC |
	|         | ssh cat                                                |                          |         |         |                     |                     |
	|         | /etc/containerd/config.toml                            |                          |         |         |                     |                     |
	| delete  | -p force-systemd-env-682017                            | force-systemd-env-682017 | jenkins | v1.32.0 | 16 Mar 24 17:38 UTC | 16 Mar 24 17:38 UTC |
	| start   | -p cert-options-380412                                 | cert-options-380412      | jenkins | v1.32.0 | 16 Mar 24 17:38 UTC | 16 Mar 24 17:38 UTC |
	|         | --memory=2048                                          |                          |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                              |                          |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15                          |                          |         |         |                     |                     |
	|         | --apiserver-names=localhost                            |                          |         |         |                     |                     |
	|         | --apiserver-names=www.google.com                       |                          |         |         |                     |                     |
	|         | --apiserver-port=8555                                  |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	| ssh     | cert-options-380412 ssh                                | cert-options-380412      | jenkins | v1.32.0 | 16 Mar 24 17:38 UTC | 16 Mar 24 17:38 UTC |
	|         | openssl x509 -text -noout -in                          |                          |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                          |         |         |                     |                     |
	| ssh     | -p cert-options-380412 -- sudo                         | cert-options-380412      | jenkins | v1.32.0 | 16 Mar 24 17:38 UTC | 16 Mar 24 17:38 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                          |         |         |                     |                     |
	| delete  | -p cert-options-380412                                 | cert-options-380412      | jenkins | v1.32.0 | 16 Mar 24 17:38 UTC | 16 Mar 24 17:38 UTC |
	| start   | -p old-k8s-version-746380                              | old-k8s-version-746380   | jenkins | v1.32.0 | 16 Mar 24 17:38 UTC | 16 Mar 24 17:41 UTC |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                          |         |         |                     |                     |
	|         | --kvm-network=default                                  |                          |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                          |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                          |         |         |                     |                     |
	|         | --keep-context=false                                   |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                          |         |         |                     |                     |
	| start   | -p cert-expiration-906495                              | cert-expiration-906495   | jenkins | v1.32.0 | 16 Mar 24 17:41 UTC | 16 Mar 24 17:41 UTC |
	|         | --memory=2048                                          |                          |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	| delete  | -p cert-expiration-906495                              | cert-expiration-906495   | jenkins | v1.32.0 | 16 Mar 24 17:41 UTC | 16 Mar 24 17:41 UTC |
	| start   | -p no-preload-308593                                   | no-preload-308593        | jenkins | v1.32.0 | 16 Mar 24 17:41 UTC | 16 Mar 24 17:42 UTC |
	|         | --memory=2200 --alsologtostderr                        |                          |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                          |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-746380        | old-k8s-version-746380   | jenkins | v1.32.0 | 16 Mar 24 17:41 UTC | 16 Mar 24 17:41 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                          |         |         |                     |                     |
	| stop    | -p old-k8s-version-746380                              | old-k8s-version-746380   | jenkins | v1.32.0 | 16 Mar 24 17:41 UTC | 16 Mar 24 17:41 UTC |
	|         | --alsologtostderr -v=3                                 |                          |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-746380             | old-k8s-version-746380   | jenkins | v1.32.0 | 16 Mar 24 17:41 UTC | 16 Mar 24 17:41 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                          |         |         |                     |                     |
	| start   | -p old-k8s-version-746380                              | old-k8s-version-746380   | jenkins | v1.32.0 | 16 Mar 24 17:41 UTC |                     |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                          |         |         |                     |                     |
	|         | --kvm-network=default                                  |                          |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                          |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                          |         |         |                     |                     |
	|         | --keep-context=false                                   |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                          |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-308593             | no-preload-308593        | jenkins | v1.32.0 | 16 Mar 24 17:42 UTC | 16 Mar 24 17:42 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                          |         |         |                     |                     |
	| stop    | -p no-preload-308593                                   | no-preload-308593        | jenkins | v1.32.0 | 16 Mar 24 17:42 UTC | 16 Mar 24 17:42 UTC |
	|         | --alsologtostderr -v=3                                 |                          |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-308593                  | no-preload-308593        | jenkins | v1.32.0 | 16 Mar 24 17:42 UTC | 16 Mar 24 17:42 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                          |         |         |                     |                     |
	| start   | -p no-preload-308593                                   | no-preload-308593        | jenkins | v1.32.0 | 16 Mar 24 17:42 UTC | 16 Mar 24 17:47 UTC |
	|         | --memory=2200 --alsologtostderr                        |                          |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                          |         |         |                     |                     |
	| image   | no-preload-308593 image list                           | no-preload-308593        | jenkins | v1.32.0 | 16 Mar 24 17:47 UTC | 16 Mar 24 17:47 UTC |
	|         | --format=json                                          |                          |         |         |                     |                     |
	| pause   | -p no-preload-308593                                   | no-preload-308593        | jenkins | v1.32.0 | 16 Mar 24 17:47 UTC | 16 Mar 24 17:47 UTC |
	|         | --alsologtostderr -v=1                                 |                          |         |         |                     |                     |
	| unpause | -p no-preload-308593                                   | no-preload-308593        | jenkins | v1.32.0 | 16 Mar 24 17:47 UTC | 16 Mar 24 17:47 UTC |
	|         | --alsologtostderr -v=1                                 |                          |         |         |                     |                     |
	| delete  | -p no-preload-308593                                   | no-preload-308593        | jenkins | v1.32.0 | 16 Mar 24 17:47 UTC | 16 Mar 24 17:47 UTC |
	| delete  | -p no-preload-308593                                   | no-preload-308593        | jenkins | v1.32.0 | 16 Mar 24 17:47 UTC | 16 Mar 24 17:47 UTC |
	| start   | -p embed-certs-126148                                  | embed-certs-126148       | jenkins | v1.32.0 | 16 Mar 24 17:47 UTC |                     |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                          |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                          |         |         |                     |                     |
	|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/16 17:47:46
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0316 17:47:46.089160  491570 out.go:291] Setting OutFile to fd 1 ...
	I0316 17:47:46.089949  491570 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0316 17:47:46.090000  491570 out.go:304] Setting ErrFile to fd 2...
	I0316 17:47:46.090020  491570 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0316 17:47:46.090434  491570 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18277-280225/.minikube/bin
	I0316 17:47:46.091980  491570 out.go:298] Setting JSON to false
	I0316 17:47:46.095748  491570 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":12612,"bootTime":1710598654,"procs":234,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0316 17:47:46.095871  491570 start.go:139] virtualization:  
	I0316 17:47:46.099235  491570 out.go:177] * [embed-certs-126148] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0316 17:47:46.101397  491570 out.go:177]   - MINIKUBE_LOCATION=18277
	I0316 17:47:46.103677  491570 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0316 17:47:46.101549  491570 notify.go:220] Checking for updates...
	I0316 17:47:46.108497  491570 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18277-280225/kubeconfig
	I0316 17:47:46.111148  491570 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18277-280225/.minikube
	I0316 17:47:46.113255  491570 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0316 17:47:46.115215  491570 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0316 17:47:46.118378  491570 config.go:182] Loaded profile config "old-k8s-version-746380": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0316 17:47:46.118477  491570 driver.go:392] Setting default libvirt URI to qemu:///system
	I0316 17:47:46.139771  491570 docker.go:122] docker version: linux-25.0.4:Docker Engine - Community
	I0316 17:47:46.139898  491570 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0316 17:47:46.205117  491570 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:56 SystemTime:2024-03-16 17:47:46.19461967 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarc
h64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerError
s:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0316 17:47:46.205226  491570 docker.go:295] overlay module found
	I0316 17:47:46.208582  491570 out.go:177] * Using the docker driver based on user configuration
	I0316 17:47:46.210311  491570 start.go:297] selected driver: docker
	I0316 17:47:46.210326  491570 start.go:901] validating driver "docker" against <nil>
	I0316 17:47:46.210346  491570 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0316 17:47:46.211039  491570 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0316 17:47:46.264775  491570 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:56 SystemTime:2024-03-16 17:47:46.256049731 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
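Note: the two "docker system info --format "{{json .}}"" probes above are how the driver check collects host facts (overlay2 storage driver, 2 CPUs, ~8 GiB memory, cgroupfs cgroup driver, linux/aarch64). The sketch below shows that kind of probe in a self-contained form; the hostInfo struct and the field subset are illustrative, not minikube's actual types — only the docker command and the JSON field names come from the output above.

// Illustrative only: shell out to `docker system info --format "{{json .}}"`
// (the same command logged above) and decode the handful of fields a driver
// check cares about. Field names match Docker's JSON output; the hostInfo
// struct itself is hypothetical.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type hostInfo struct {
	Driver       string `json:"Driver"`       // e.g. "overlay2"
	NCPU         int    `json:"NCPU"`         // e.g. 2
	MemTotal     int64  `json:"MemTotal"`     // bytes, e.g. 8215035904
	CgroupDriver string `json:"CgroupDriver"` // e.g. "cgroupfs"
	OSType       string `json:"OSType"`       // e.g. "linux"
	Architecture string `json:"Architecture"` // e.g. "aarch64"
}

func main() {
	out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
	if err != nil {
		panic(err)
	}
	var info hostInfo
	if err := json.Unmarshal(out, &info); err != nil {
		panic(err)
	}
	fmt.Printf("driver=%s cpus=%d mem=%dMiB cgroup=%s %s/%s\n",
		info.Driver, info.NCPU, info.MemTotal>>20, info.CgroupDriver,
		info.OSType, info.Architecture)
}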
	I0316 17:47:46.265007  491570 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0316 17:47:46.265255  491570 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0316 17:47:46.267512  491570 out.go:177] * Using Docker driver with root privileges
	I0316 17:47:46.269317  491570 cni.go:84] Creating CNI manager for ""
	I0316 17:47:46.269341  491570 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0316 17:47:46.269352  491570 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0316 17:47:46.269436  491570 start.go:340] cluster config:
	{Name:embed-certs-126148 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-126148 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: Stati
cIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0316 17:47:46.273027  491570 out.go:177] * Starting "embed-certs-126148" primary control-plane node in "embed-certs-126148" cluster
	I0316 17:47:46.274779  491570 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0316 17:47:46.276695  491570 out.go:177] * Pulling base image v0.0.42-1710284843-18375 ...
	I0316 17:47:46.278438  491570 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0316 17:47:46.278482  491570 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f in local docker daemon
	I0316 17:47:46.278491  491570 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18277-280225/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4
	I0316 17:47:46.278502  491570 cache.go:56] Caching tarball of preloaded images
	I0316 17:47:46.278585  491570 preload.go:173] Found /home/jenkins/minikube-integration/18277-280225/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0316 17:47:46.278595  491570 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on containerd
	I0316 17:47:46.278824  491570 profile.go:142] Saving config to /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/embed-certs-126148/config.json ...
	I0316 17:47:46.278856  491570 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/embed-certs-126148/config.json: {Name:mk0f07087c2fdffcc63e8e3772483f53d0e5d42b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 17:47:46.295222  491570 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f in local docker daemon, skipping pull
	I0316 17:47:46.295244  491570 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f exists in daemon, skipping load
	I0316 17:47:46.295318  491570 cache.go:194] Successfully downloaded all kic artifacts
	I0316 17:47:46.295351  491570 start.go:360] acquireMachinesLock for embed-certs-126148: {Name:mk0e2a1c1540ab83d29859b84f6b13ae26b17a9a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0316 17:47:46.295476  491570 start.go:364] duration metric: took 107.003µs to acquireMachinesLock for "embed-certs-126148"
	I0316 17:47:46.295502  491570 start.go:93] Provisioning new machine with config: &{Name:embed-certs-126148 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-126148 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableM
etrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0316 17:47:46.295589  491570 start.go:125] createHost starting for "" (driver="docker")
	I0316 17:47:45.313942  481631 pod_ready.go:102] pod "metrics-server-9975d5f86-s65lt" in "kube-system" namespace has status "Ready":"False"
	I0316 17:47:47.804246  481631 pod_ready.go:102] pod "metrics-server-9975d5f86-s65lt" in "kube-system" namespace has status "Ready":"False"
	I0316 17:47:46.298670  491570 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0316 17:47:46.299191  491570 start.go:159] libmachine.API.Create for "embed-certs-126148" (driver="docker")
	I0316 17:47:46.300233  491570 client.go:168] LocalClient.Create starting
	I0316 17:47:46.300378  491570 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18277-280225/.minikube/certs/ca.pem
	I0316 17:47:46.300416  491570 main.go:141] libmachine: Decoding PEM data...
	I0316 17:47:46.300433  491570 main.go:141] libmachine: Parsing certificate...
	I0316 17:47:46.300485  491570 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18277-280225/.minikube/certs/cert.pem
	I0316 17:47:46.300568  491570 main.go:141] libmachine: Decoding PEM data...
	I0316 17:47:46.300610  491570 main.go:141] libmachine: Parsing certificate...
	I0316 17:47:46.302078  491570 cli_runner.go:164] Run: docker network inspect embed-certs-126148 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0316 17:47:46.319267  491570 cli_runner.go:211] docker network inspect embed-certs-126148 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0316 17:47:46.319357  491570 network_create.go:281] running [docker network inspect embed-certs-126148] to gather additional debugging logs...
	I0316 17:47:46.319372  491570 cli_runner.go:164] Run: docker network inspect embed-certs-126148
	W0316 17:47:46.337706  491570 cli_runner.go:211] docker network inspect embed-certs-126148 returned with exit code 1
	I0316 17:47:46.337734  491570 network_create.go:284] error running [docker network inspect embed-certs-126148]: docker network inspect embed-certs-126148: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-126148 not found
	I0316 17:47:46.337759  491570 network_create.go:286] output of [docker network inspect embed-certs-126148]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-126148 not found
	
	** /stderr **
	I0316 17:47:46.337852  491570 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0316 17:47:46.356689  491570 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-5c9a54818105 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:67:2f:dd:3c} reservation:<nil>}
	I0316 17:47:46.357080  491570 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-d82c29a3afed IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:ff:93:07:95} reservation:<nil>}
	I0316 17:47:46.357612  491570 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-50c055c19ff8 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:85:33:e8:ad} reservation:<nil>}
	I0316 17:47:46.358204  491570 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-b4e11c559a07 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:02:42:a3:2e:e8:a5} reservation:<nil>}
	I0316 17:47:46.358823  491570 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40025a1240}
	I0316 17:47:46.358852  491570 network_create.go:124] attempt to create docker network embed-certs-126148 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0316 17:47:46.358925  491570 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-126148 embed-certs-126148
	I0316 17:47:46.421147  491570 network_create.go:108] docker network embed-certs-126148 192.168.85.0/24 created
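Note: the subnet walk above tries candidate /24 networks (third octet 49, 58, 67, 76, ... in steps of 9), skips the ones already claimed by existing bridges, and creates the cluster network on the first free one (192.168.85.0/24 here). A standalone sketch of that probing, mirroring the docker network create flags from the log, follows; it is an illustration, not minikube's network_create.go.

// Illustrative sketch: find the first free 192.168.x.0/24 among the candidates
// probed in the log (third octet stepping by 9) and create a bridge network
// for the cluster, mirroring the `docker network create` invocation above.
// Error handling is intentionally minimal.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// takenSubnets collects the IPAM subnets of every existing Docker network.
func takenSubnets() map[string]bool {
	taken := map[string]bool{}
	names, _ := exec.Command("docker", "network", "ls", "--format", "{{.Name}}").Output()
	for _, name := range strings.Fields(string(names)) {
		out, err := exec.Command("docker", "network", "inspect", name,
			"--format", "{{range .IPAM.Config}}{{.Subnet}} {{end}}").Output()
		if err != nil {
			continue
		}
		for _, s := range strings.Fields(string(out)) {
			taken[s] = true
		}
	}
	return taken
}

func main() {
	taken := takenSubnets()
	for octet := 49; octet <= 255; octet += 9 { // 49, 58, 67, 76, 85, ...
		subnet := fmt.Sprintf("192.168.%d.0/24", octet)
		if taken[subnet] {
			fmt.Println("skipping taken subnet", subnet)
			continue
		}
		gateway := fmt.Sprintf("192.168.%d.1", octet)
		cmd := exec.Command("docker", "network", "create", "--driver=bridge",
			"--subnet="+subnet, "--gateway="+gateway,
			"-o", "--ip-masq", "-o", "--icc",
			"-o", "com.docker.network.driver.mtu=1500",
			"--label=created_by.minikube.sigs.k8s.io=true",
			"--label=name.minikube.sigs.k8s.io=embed-certs-126148",
			"embed-certs-126148")
		if out, err := cmd.CombinedOutput(); err != nil {
			panic(fmt.Errorf("network create failed: %v: %s", err, out))
		}
		fmt.Println("created", subnet)
		return
	}
	panic("no free private /24 found")
}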
	I0316 17:47:46.421177  491570 kic.go:121] calculated static IP "192.168.85.2" for the "embed-certs-126148" container
	I0316 17:47:46.421247  491570 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0316 17:47:46.437466  491570 cli_runner.go:164] Run: docker volume create embed-certs-126148 --label name.minikube.sigs.k8s.io=embed-certs-126148 --label created_by.minikube.sigs.k8s.io=true
	I0316 17:47:46.460344  491570 oci.go:103] Successfully created a docker volume embed-certs-126148
	I0316 17:47:46.460427  491570 cli_runner.go:164] Run: docker run --rm --name embed-certs-126148-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-126148 --entrypoint /usr/bin/test -v embed-certs-126148:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f -d /var/lib
	I0316 17:47:47.135567  491570 oci.go:107] Successfully prepared a docker volume embed-certs-126148
	I0316 17:47:47.135644  491570 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0316 17:47:47.135664  491570 kic.go:194] Starting extracting preloaded images to volume ...
	I0316 17:47:47.135767  491570 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18277-280225/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-126148:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f -I lz4 -xf /preloaded.tar -C /extractDir
	I0316 17:47:50.303876  481631 pod_ready.go:102] pod "metrics-server-9975d5f86-s65lt" in "kube-system" namespace has status "Ready":"False"
	I0316 17:47:52.814958  481631 pod_ready.go:102] pod "metrics-server-9975d5f86-s65lt" in "kube-system" namespace has status "Ready":"False"
	I0316 17:47:52.909655  491570 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18277-280225/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-126148:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f -I lz4 -xf /preloaded.tar -C /extractDir: (5.773847416s)
	I0316 17:47:52.909686  491570 kic.go:203] duration metric: took 5.774018614s to extract preloaded images to volume ...
	W0316 17:47:52.909828  491570 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0316 17:47:52.909951  491570 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0316 17:47:52.968169  491570 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-126148 --name embed-certs-126148 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-126148 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-126148 --network embed-certs-126148 --ip 192.168.85.2 --volume embed-certs-126148:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f
	I0316 17:47:53.285983  491570 cli_runner.go:164] Run: docker container inspect embed-certs-126148 --format={{.State.Running}}
	I0316 17:47:53.310332  491570 cli_runner.go:164] Run: docker container inspect embed-certs-126148 --format={{.State.Status}}
	I0316 17:47:53.338451  491570 cli_runner.go:164] Run: docker exec embed-certs-126148 stat /var/lib/dpkg/alternatives/iptables
	I0316 17:47:53.410276  491570 oci.go:144] the created container "embed-certs-126148" has a running status.
	I0316 17:47:53.410315  491570 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18277-280225/.minikube/machines/embed-certs-126148/id_rsa...
	I0316 17:47:53.912907  491570 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18277-280225/.minikube/machines/embed-certs-126148/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0316 17:47:53.948219  491570 cli_runner.go:164] Run: docker container inspect embed-certs-126148 --format={{.State.Status}}
	I0316 17:47:53.980235  491570 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0316 17:47:53.980256  491570 kic_runner.go:114] Args: [docker exec --privileged embed-certs-126148 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0316 17:47:54.062660  491570 cli_runner.go:164] Run: docker container inspect embed-certs-126148 --format={{.State.Status}}
	I0316 17:47:54.086525  491570 machine.go:94] provisionDockerMachine start ...
	I0316 17:47:54.086625  491570 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-126148
	I0316 17:47:54.113976  491570 main.go:141] libmachine: Using SSH client type: native
	I0316 17:47:54.114262  491570 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e00] 0x3e4660 <nil>  [] 0s} 127.0.0.1 33450 <nil> <nil>}
	I0316 17:47:54.114273  491570 main.go:141] libmachine: About to run SSH command:
	hostname
	I0316 17:47:54.279741  491570 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-126148
	
	I0316 17:47:54.279768  491570 ubuntu.go:169] provisioning hostname "embed-certs-126148"
	I0316 17:47:54.279833  491570 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-126148
	I0316 17:47:54.322300  491570 main.go:141] libmachine: Using SSH client type: native
	I0316 17:47:54.322575  491570 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e00] 0x3e4660 <nil>  [] 0s} 127.0.0.1 33450 <nil> <nil>}
	I0316 17:47:54.322595  491570 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-126148 && echo "embed-certs-126148" | sudo tee /etc/hostname
	I0316 17:47:54.480079  491570 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-126148
	
	I0316 17:47:54.480184  491570 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-126148
	I0316 17:47:54.500096  491570 main.go:141] libmachine: Using SSH client type: native
	I0316 17:47:54.500347  491570 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e00] 0x3e4660 <nil>  [] 0s} 127.0.0.1 33450 <nil> <nil>}
	I0316 17:47:54.500370  491570 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-126148' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-126148/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-126148' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0316 17:47:54.647986  491570 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0316 17:47:54.648015  491570 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18277-280225/.minikube CaCertPath:/home/jenkins/minikube-integration/18277-280225/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18277-280225/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18277-280225/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18277-280225/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18277-280225/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18277-280225/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18277-280225/.minikube}
	I0316 17:47:54.648042  491570 ubuntu.go:177] setting up certificates
	I0316 17:47:54.648063  491570 provision.go:84] configureAuth start
	I0316 17:47:54.648124  491570 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-126148
	I0316 17:47:54.667147  491570 provision.go:143] copyHostCerts
	I0316 17:47:54.667221  491570 exec_runner.go:144] found /home/jenkins/minikube-integration/18277-280225/.minikube/cert.pem, removing ...
	I0316 17:47:54.667236  491570 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18277-280225/.minikube/cert.pem
	I0316 17:47:54.667313  491570 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18277-280225/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18277-280225/.minikube/cert.pem (1123 bytes)
	I0316 17:47:54.667407  491570 exec_runner.go:144] found /home/jenkins/minikube-integration/18277-280225/.minikube/key.pem, removing ...
	I0316 17:47:54.667417  491570 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18277-280225/.minikube/key.pem
	I0316 17:47:54.667444  491570 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18277-280225/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18277-280225/.minikube/key.pem (1675 bytes)
	I0316 17:47:54.667506  491570 exec_runner.go:144] found /home/jenkins/minikube-integration/18277-280225/.minikube/ca.pem, removing ...
	I0316 17:47:54.667516  491570 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18277-280225/.minikube/ca.pem
	I0316 17:47:54.667541  491570 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18277-280225/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18277-280225/.minikube/ca.pem (1078 bytes)
	I0316 17:47:54.667590  491570 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18277-280225/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18277-280225/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18277-280225/.minikube/certs/ca-key.pem org=jenkins.embed-certs-126148 san=[127.0.0.1 192.168.85.2 embed-certs-126148 localhost minikube]
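Note: the server certificate generated at this step is signed by the minikube CA and carries SANs for 127.0.0.1, the container's static IP 192.168.85.2, the profile name, localhost and minikube, with the 26280h validity taken from CertExpiration in the config above. A compact standard-library sketch of issuing such a certificate follows; it is generic crypto/x509 usage rather than minikube's provision.go, and the long /home/jenkins/... cert paths are abbreviated to ca.pem / ca-key.pem.

// Illustrative sketch: issue a server certificate signed by an existing CA
// with the SAN list logged above. Generic crypto/x509 usage, not minikube's
// provisioning code; paths are shortened stand-ins for the ones in the log.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func mustPEM(path string) *pem.Block {
	raw, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		panic("no PEM data in " + path)
	}
	return block
}

func main() {
	caCert, err := x509.ParseCertificate(mustPEM("ca.pem").Bytes)
	if err != nil {
		panic(err)
	}
	caKey, err := x509.ParsePKCS1PrivateKey(mustPEM("ca-key.pem").Bytes)
	if err != nil {
		panic(err)
	}

	serverKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-126148"}},
		DNSNames:     []string{"embed-certs-126148", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config above
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	out, _ := os.Create("server.pem")
	defer out.Close()
	pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}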
	I0316 17:47:55.161665  491570 provision.go:177] copyRemoteCerts
	I0316 17:47:55.161764  491570 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0316 17:47:55.161825  491570 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-126148
	I0316 17:47:55.179743  491570 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33450 SSHKeyPath:/home/jenkins/minikube-integration/18277-280225/.minikube/machines/embed-certs-126148/id_rsa Username:docker}
	I0316 17:47:55.280629  491570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18277-280225/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0316 17:47:55.319945  491570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18277-280225/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0316 17:47:55.344866  491570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18277-280225/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0316 17:47:55.369706  491570 provision.go:87] duration metric: took 721.628169ms to configureAuth
	I0316 17:47:55.369736  491570 ubuntu.go:193] setting minikube options for container-runtime
	I0316 17:47:55.369918  491570 config.go:182] Loaded profile config "embed-certs-126148": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0316 17:47:55.369934  491570 machine.go:97] duration metric: took 1.283385449s to provisionDockerMachine
	I0316 17:47:55.369940  491570 client.go:171] duration metric: took 9.069693389s to LocalClient.Create
	I0316 17:47:55.369959  491570 start.go:167] duration metric: took 9.070768531s to libmachine.API.Create "embed-certs-126148"
	I0316 17:47:55.369969  491570 start.go:293] postStartSetup for "embed-certs-126148" (driver="docker")
	I0316 17:47:55.369979  491570 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0316 17:47:55.370037  491570 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0316 17:47:55.370088  491570 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-126148
	I0316 17:47:55.385909  491570 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33450 SSHKeyPath:/home/jenkins/minikube-integration/18277-280225/.minikube/machines/embed-certs-126148/id_rsa Username:docker}
	I0316 17:47:55.484656  491570 ssh_runner.go:195] Run: cat /etc/os-release
	I0316 17:47:55.487783  491570 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0316 17:47:55.487823  491570 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0316 17:47:55.487834  491570 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0316 17:47:55.487841  491570 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0316 17:47:55.487852  491570 filesync.go:126] Scanning /home/jenkins/minikube-integration/18277-280225/.minikube/addons for local assets ...
	I0316 17:47:55.487912  491570 filesync.go:126] Scanning /home/jenkins/minikube-integration/18277-280225/.minikube/files for local assets ...
	I0316 17:47:55.487998  491570 filesync.go:149] local asset: /home/jenkins/minikube-integration/18277-280225/.minikube/files/etc/ssl/certs/2856332.pem -> 2856332.pem in /etc/ssl/certs
	I0316 17:47:55.488106  491570 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0316 17:47:55.496886  491570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18277-280225/.minikube/files/etc/ssl/certs/2856332.pem --> /etc/ssl/certs/2856332.pem (1708 bytes)
	I0316 17:47:55.522057  491570 start.go:296] duration metric: took 152.072423ms for postStartSetup
	I0316 17:47:55.522461  491570 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-126148
	I0316 17:47:55.537875  491570 profile.go:142] Saving config to /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/embed-certs-126148/config.json ...
	I0316 17:47:55.538158  491570 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0316 17:47:55.538251  491570 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-126148
	I0316 17:47:55.554216  491570 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33450 SSHKeyPath:/home/jenkins/minikube-integration/18277-280225/.minikube/machines/embed-certs-126148/id_rsa Username:docker}
	I0316 17:47:55.656489  491570 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0316 17:47:55.661555  491570 start.go:128] duration metric: took 9.365914328s to createHost
	I0316 17:47:55.661578  491570 start.go:83] releasing machines lock for "embed-certs-126148", held for 9.366092674s
	I0316 17:47:55.661649  491570 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-126148
	I0316 17:47:55.677812  491570 ssh_runner.go:195] Run: cat /version.json
	I0316 17:47:55.677864  491570 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-126148
	I0316 17:47:55.678174  491570 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0316 17:47:55.678223  491570 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-126148
	I0316 17:47:55.696456  491570 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33450 SSHKeyPath:/home/jenkins/minikube-integration/18277-280225/.minikube/machines/embed-certs-126148/id_rsa Username:docker}
	I0316 17:47:55.709105  491570 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33450 SSHKeyPath:/home/jenkins/minikube-integration/18277-280225/.minikube/machines/embed-certs-126148/id_rsa Username:docker}
	I0316 17:47:55.924986  491570 ssh_runner.go:195] Run: systemctl --version
	I0316 17:47:55.929976  491570 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0316 17:47:55.934726  491570 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0316 17:47:55.961758  491570 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0316 17:47:55.961875  491570 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0316 17:47:55.991404  491570 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0316 17:47:55.991429  491570 start.go:494] detecting cgroup driver to use...
	I0316 17:47:55.991496  491570 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0316 17:47:55.991573  491570 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0316 17:47:56.007569  491570 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0316 17:47:56.021020  491570 docker.go:217] disabling cri-docker service (if available) ...
	I0316 17:47:56.021115  491570 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0316 17:47:56.035546  491570 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0316 17:47:56.052762  491570 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0316 17:47:56.144966  491570 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0316 17:47:56.232513  491570 docker.go:233] disabling docker service ...
	I0316 17:47:56.232609  491570 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0316 17:47:56.257826  491570 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0316 17:47:56.270033  491570 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0316 17:47:56.365339  491570 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0316 17:47:56.463236  491570 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0316 17:47:56.475374  491570 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0316 17:47:56.492237  491570 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0316 17:47:56.502575  491570 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0316 17:47:56.515736  491570 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0316 17:47:56.515811  491570 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0316 17:47:56.529830  491570 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0316 17:47:56.540174  491570 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0316 17:47:56.551142  491570 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0316 17:47:56.563706  491570 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0316 17:47:56.574223  491570 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0316 17:47:56.584074  491570 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0316 17:47:56.592715  491570 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0316 17:47:56.601606  491570 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0316 17:47:56.691983  491570 ssh_runner.go:195] Run: sudo systemctl restart containerd
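Note: the sed pipeline above rewrites /etc/containerd/config.toml in place: it pins sandbox_image to registry.k8s.io/pause:3.9, sets restrict_oom_score_adj = false, keeps the cgroupfs driver by forcing SystemdCgroup = false, switches the runtime to io.containerd.runc.v2 and points conf_dir at /etc/cni/net.d, then reloads systemd and restarts containerd. The sketch below applies a subset of those substitutions with Go regexps; it illustrates the edits, it is not minikube's implementation.

// Illustrative sketch of the config.toml rewrites performed via `sed` above:
// pin the sandbox image, keep the cgroupfs driver (SystemdCgroup = false) and
// point the CNI conf_dir at /etc/cni/net.d. It simply applies the same
// substitutions with Go regexps and writes the file back.
package main

import (
	"os"
	"regexp"
)

func main() {
	const path = "/etc/containerd/config.toml"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	subs := []struct{ re, repl string }{
		{`(?m)^(\s*)sandbox_image = .*$`, `${1}sandbox_image = "registry.k8s.io/pause:3.9"`},
		{`(?m)^(\s*)restrict_oom_score_adj = .*$`, `${1}restrict_oom_score_adj = false`},
		{`(?m)^(\s*)SystemdCgroup = .*$`, `${1}SystemdCgroup = false`},
		{`(?m)^(\s*)conf_dir = .*$`, `${1}conf_dir = "/etc/cni/net.d"`},
	}
	for _, s := range subs {
		data = regexp.MustCompile(s.re).ReplaceAll(data, []byte(s.repl))
	}
	if err := os.WriteFile(path, data, 0o644); err != nil {
		panic(err)
	}
	// After rewriting the config, the log runs `systemctl daemon-reload`,
	// `systemctl restart containerd`, and then waits for containerd.sock.
}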
	I0316 17:47:56.842720  491570 start.go:541] Will wait 60s for socket path /run/containerd/containerd.sock
	I0316 17:47:56.842793  491570 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0316 17:47:56.849973  491570 start.go:562] Will wait 60s for crictl version
	I0316 17:47:56.850080  491570 ssh_runner.go:195] Run: which crictl
	I0316 17:47:56.856458  491570 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0316 17:47:56.910850  491570 start.go:578] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.28
	RuntimeApiVersion:  v1
	I0316 17:47:56.910915  491570 ssh_runner.go:195] Run: containerd --version
	I0316 17:47:56.942035  491570 ssh_runner.go:195] Run: containerd --version
	I0316 17:47:56.975254  491570 out.go:177] * Preparing Kubernetes v1.28.4 on containerd 1.6.28 ...
	I0316 17:47:55.305590  481631 pod_ready.go:102] pod "metrics-server-9975d5f86-s65lt" in "kube-system" namespace has status "Ready":"False"
	I0316 17:47:56.801819  481631 pod_ready.go:81] duration metric: took 4m0.005459581s for pod "metrics-server-9975d5f86-s65lt" in "kube-system" namespace to be "Ready" ...
	E0316 17:47:56.801842  481631 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0316 17:47:56.801850  481631 pod_ready.go:38] duration metric: took 5m29.760095608s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
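Note: the "WaitExtra: waitPodCondition: context deadline exceeded" failure above is the usual shape of a deadline-bounded poll: the Ready condition is re-checked on an interval until the context's deadline fires, and the loop then returns ctx.Err(). A generic sketch of that pattern follows; it uses only the standard library, is not minikube's pod_ready.go, and checkPodReady is a hypothetical stand-in for the real condition.

// Generic deadline-bounded polling loop, the pattern behind the
// "context deadline exceeded" failure above. checkPodReady is a hypothetical
// stand-in for the real Ready check; the 2s interval is an arbitrary choice.
package main

import (
	"context"
	"fmt"
	"time"
)

func waitForCondition(ctx context.Context, interval time.Duration, cond func() bool) error {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		if cond() {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err() // context.DeadlineExceeded -> "context deadline exceeded"
		case <-ticker.C:
		}
	}
}

func main() {
	checkPodReady := func() bool { return false } // stand-in: the pod never becomes Ready
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	if err := waitForCondition(ctx, 2*time.Second, checkPodReady); err != nil {
		fmt.Println("waitPodCondition:", err) // prints: context deadline exceeded
	}
}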
	I0316 17:47:56.801864  481631 api_server.go:52] waiting for apiserver process to appear ...
	I0316 17:47:56.801890  481631 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0316 17:47:56.801947  481631 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 17:47:56.846384  481631 cri.go:89] found id: "f6a137e8a3b1485dd10f52919de9c0fef41fc33d23e13d15ecd70b4ee918c6d5"
	I0316 17:47:56.846404  481631 cri.go:89] found id: "0340e5ca0be60b47abce880f66d4c4e5fc876c20b19e0b5c769ec2a4f1b8547b"
	I0316 17:47:56.846409  481631 cri.go:89] found id: ""
	I0316 17:47:56.846417  481631 logs.go:276] 2 containers: [f6a137e8a3b1485dd10f52919de9c0fef41fc33d23e13d15ecd70b4ee918c6d5 0340e5ca0be60b47abce880f66d4c4e5fc876c20b19e0b5c769ec2a4f1b8547b]
	I0316 17:47:56.846473  481631 ssh_runner.go:195] Run: which crictl
	I0316 17:47:56.851464  481631 ssh_runner.go:195] Run: which crictl
	I0316 17:47:56.855231  481631 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0316 17:47:56.855291  481631 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 17:47:56.900649  481631 cri.go:89] found id: "16d138fc440dd55f8a882b1a470bd88b116d89b1276ca648e106057f46db7677"
	I0316 17:47:56.900668  481631 cri.go:89] found id: "53e126d87d370fc7c40afb41dc1a7f49e87707a49e2d1486adf3a6445555d955"
	I0316 17:47:56.900673  481631 cri.go:89] found id: ""
	I0316 17:47:56.900680  481631 logs.go:276] 2 containers: [16d138fc440dd55f8a882b1a470bd88b116d89b1276ca648e106057f46db7677 53e126d87d370fc7c40afb41dc1a7f49e87707a49e2d1486adf3a6445555d955]
	I0316 17:47:56.900734  481631 ssh_runner.go:195] Run: which crictl
	I0316 17:47:56.904641  481631 ssh_runner.go:195] Run: which crictl
	I0316 17:47:56.908512  481631 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0316 17:47:56.908579  481631 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 17:47:56.958196  481631 cri.go:89] found id: "8dd0ee223c90f99d346db9114977e56c2bbfecb904aa1223b3e8e1109264981d"
	I0316 17:47:56.958221  481631 cri.go:89] found id: "a7700f61f9427c51311df28b60dc3da67a68a1be40d1f17810185e95a656508c"
	I0316 17:47:56.958227  481631 cri.go:89] found id: ""
	I0316 17:47:56.958235  481631 logs.go:276] 2 containers: [8dd0ee223c90f99d346db9114977e56c2bbfecb904aa1223b3e8e1109264981d a7700f61f9427c51311df28b60dc3da67a68a1be40d1f17810185e95a656508c]
	I0316 17:47:56.958356  481631 ssh_runner.go:195] Run: which crictl
	I0316 17:47:56.962503  481631 ssh_runner.go:195] Run: which crictl
	I0316 17:47:56.966137  481631 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0316 17:47:56.966233  481631 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 17:47:57.030461  481631 cri.go:89] found id: "bf0d1869cc0d68bb43a663a92a7a2eb950593536676cefca598146c6f602803e"
	I0316 17:47:57.030487  481631 cri.go:89] found id: "bb9fc8b360819c7a19f5e182ffa90ecf3dc71344631dac019d43ec3d489bbb79"
	I0316 17:47:57.030492  481631 cri.go:89] found id: ""
	I0316 17:47:57.030499  481631 logs.go:276] 2 containers: [bf0d1869cc0d68bb43a663a92a7a2eb950593536676cefca598146c6f602803e bb9fc8b360819c7a19f5e182ffa90ecf3dc71344631dac019d43ec3d489bbb79]
	I0316 17:47:57.030555  481631 ssh_runner.go:195] Run: which crictl
	I0316 17:47:57.035500  481631 ssh_runner.go:195] Run: which crictl
	I0316 17:47:57.040606  481631 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0316 17:47:57.040685  481631 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 17:47:57.088626  481631 cri.go:89] found id: "53266df997beffb4f7bfa6609d282d4f498bcdb315a85073da81dd740c85139f"
	I0316 17:47:57.088650  481631 cri.go:89] found id: "ccbf82b14ebc82618a0db0f8cce371995c37bb4d2cd2b873a46ac53578fbec9b"
	I0316 17:47:57.088655  481631 cri.go:89] found id: ""
	I0316 17:47:57.088663  481631 logs.go:276] 2 containers: [53266df997beffb4f7bfa6609d282d4f498bcdb315a85073da81dd740c85139f ccbf82b14ebc82618a0db0f8cce371995c37bb4d2cd2b873a46ac53578fbec9b]
	I0316 17:47:57.088727  481631 ssh_runner.go:195] Run: which crictl
	I0316 17:47:57.093520  481631 ssh_runner.go:195] Run: which crictl
	I0316 17:47:57.097632  481631 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 17:47:57.097706  481631 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 17:47:57.145155  481631 cri.go:89] found id: "c5661cb115eddb01bce4d502126d119b47c1c22da24660a5c5d57202fad6e10e"
	I0316 17:47:57.145180  481631 cri.go:89] found id: "3ffcf3139cf08c2e735e53f3bed4469b3466bcbedc7c3cb0bba55d896472640b"
	I0316 17:47:57.145184  481631 cri.go:89] found id: ""
	I0316 17:47:57.145191  481631 logs.go:276] 2 containers: [c5661cb115eddb01bce4d502126d119b47c1c22da24660a5c5d57202fad6e10e 3ffcf3139cf08c2e735e53f3bed4469b3466bcbedc7c3cb0bba55d896472640b]
	I0316 17:47:57.145246  481631 ssh_runner.go:195] Run: which crictl
	I0316 17:47:57.149767  481631 ssh_runner.go:195] Run: which crictl
	I0316 17:47:57.154500  481631 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0316 17:47:57.154575  481631 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 17:47:57.209384  481631 cri.go:89] found id: "137f2de59c6dc6edd43d77e791ff547f8b6673cd98d12a1046d38b593804d914"
	I0316 17:47:57.209408  481631 cri.go:89] found id: "22beb4846f86e0b94f967a82643633bda14c92a967549166bf63c77fcd3a5673"
	I0316 17:47:57.209413  481631 cri.go:89] found id: ""
	I0316 17:47:57.209420  481631 logs.go:276] 2 containers: [137f2de59c6dc6edd43d77e791ff547f8b6673cd98d12a1046d38b593804d914 22beb4846f86e0b94f967a82643633bda14c92a967549166bf63c77fcd3a5673]
	I0316 17:47:57.209516  481631 ssh_runner.go:195] Run: which crictl
	I0316 17:47:57.213954  481631 ssh_runner.go:195] Run: which crictl
	I0316 17:47:57.217759  481631 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0316 17:47:57.217870  481631 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0316 17:47:57.275699  481631 cri.go:89] found id: "747498059d66bf6a35719a49e025168cdec4e997bd41ff614c40cd4518774adb"
	I0316 17:47:57.275725  481631 cri.go:89] found id: "c5196a521ea11d8df3329b51f670d2873b2e489ba1e6d7bad59e4d1a58567aaf"
	I0316 17:47:57.275731  481631 cri.go:89] found id: ""
	I0316 17:47:57.275762  481631 logs.go:276] 2 containers: [747498059d66bf6a35719a49e025168cdec4e997bd41ff614c40cd4518774adb c5196a521ea11d8df3329b51f670d2873b2e489ba1e6d7bad59e4d1a58567aaf]
	I0316 17:47:57.275818  481631 ssh_runner.go:195] Run: which crictl
	I0316 17:47:57.280001  481631 ssh_runner.go:195] Run: which crictl
	I0316 17:47:57.283560  481631 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 17:47:57.283668  481631 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 17:47:57.325685  481631 cri.go:89] found id: "a8228ba39ff72ee5a9f0f601ff331405a3653e6a672688d3942fd43ebd1f5ff0"
	I0316 17:47:57.325707  481631 cri.go:89] found id: ""
	I0316 17:47:57.325715  481631 logs.go:276] 1 containers: [a8228ba39ff72ee5a9f0f601ff331405a3653e6a672688d3942fd43ebd1f5ff0]
	I0316 17:47:57.325806  481631 ssh_runner.go:195] Run: which crictl
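Note: the block above resolves container IDs per component with "crictl ps -a --quiet --name=<component>"; the block below then replays each container's last 400 log lines with "crictl logs --tail 400 <id>". The same two-step loop, reduced to a standalone sketch (illustrative only, not minikube's logs.go):

// Illustrative sketch of the log-gathering loop above: for each component,
// list matching container IDs with crictl, then dump the last 400 log lines
// of each. Assumes crictl is on PATH and may be run via sudo.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
		"storage-provisioner", "kubernetes-dashboard",
	}
	for _, name := range components {
		ids, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Println("listing", name, "failed:", err)
			continue
		}
		for _, id := range strings.Fields(string(ids)) {
			fmt.Printf("=== %s [%s] ===\n", name, id)
			out, _ := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Print(string(out))
		}
	}
}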
	I0316 17:47:57.335361  481631 logs.go:123] Gathering logs for coredns [8dd0ee223c90f99d346db9114977e56c2bbfecb904aa1223b3e8e1109264981d] ...
	I0316 17:47:57.335400  481631 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8dd0ee223c90f99d346db9114977e56c2bbfecb904aa1223b3e8e1109264981d"
	I0316 17:47:57.386271  481631 logs.go:123] Gathering logs for coredns [a7700f61f9427c51311df28b60dc3da67a68a1be40d1f17810185e95a656508c] ...
	I0316 17:47:57.386305  481631 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a7700f61f9427c51311df28b60dc3da67a68a1be40d1f17810185e95a656508c"
	I0316 17:47:57.438268  481631 logs.go:123] Gathering logs for kube-scheduler [bb9fc8b360819c7a19f5e182ffa90ecf3dc71344631dac019d43ec3d489bbb79] ...
	I0316 17:47:57.438345  481631 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb9fc8b360819c7a19f5e182ffa90ecf3dc71344631dac019d43ec3d489bbb79"
	I0316 17:47:57.492639  481631 logs.go:123] Gathering logs for kube-controller-manager [3ffcf3139cf08c2e735e53f3bed4469b3466bcbedc7c3cb0bba55d896472640b] ...
	I0316 17:47:57.492672  481631 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ffcf3139cf08c2e735e53f3bed4469b3466bcbedc7c3cb0bba55d896472640b"
	I0316 17:47:57.622160  481631 logs.go:123] Gathering logs for containerd ...
	I0316 17:47:57.622192  481631 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0316 17:47:57.692049  481631 logs.go:123] Gathering logs for dmesg ...
	I0316 17:47:57.692087  481631 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 17:47:57.712218  481631 logs.go:123] Gathering logs for kube-apiserver [0340e5ca0be60b47abce880f66d4c4e5fc876c20b19e0b5c769ec2a4f1b8547b] ...
	I0316 17:47:57.712297  481631 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0340e5ca0be60b47abce880f66d4c4e5fc876c20b19e0b5c769ec2a4f1b8547b"
	I0316 17:47:57.824470  481631 logs.go:123] Gathering logs for kube-scheduler [bf0d1869cc0d68bb43a663a92a7a2eb950593536676cefca598146c6f602803e] ...
	I0316 17:47:57.824542  481631 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bf0d1869cc0d68bb43a663a92a7a2eb950593536676cefca598146c6f602803e"
	I0316 17:47:57.885521  481631 logs.go:123] Gathering logs for kubernetes-dashboard [a8228ba39ff72ee5a9f0f601ff331405a3653e6a672688d3942fd43ebd1f5ff0] ...
	I0316 17:47:57.885543  481631 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a8228ba39ff72ee5a9f0f601ff331405a3653e6a672688d3942fd43ebd1f5ff0"
	I0316 17:47:57.936692  481631 logs.go:123] Gathering logs for etcd [16d138fc440dd55f8a882b1a470bd88b116d89b1276ca648e106057f46db7677] ...
	I0316 17:47:57.936718  481631 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 16d138fc440dd55f8a882b1a470bd88b116d89b1276ca648e106057f46db7677"
	I0316 17:47:57.996243  481631 logs.go:123] Gathering logs for etcd [53e126d87d370fc7c40afb41dc1a7f49e87707a49e2d1486adf3a6445555d955] ...
	I0316 17:47:57.996310  481631 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 53e126d87d370fc7c40afb41dc1a7f49e87707a49e2d1486adf3a6445555d955"
	I0316 17:47:58.069420  481631 logs.go:123] Gathering logs for kindnet [137f2de59c6dc6edd43d77e791ff547f8b6673cd98d12a1046d38b593804d914] ...
	I0316 17:47:58.069492  481631 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 137f2de59c6dc6edd43d77e791ff547f8b6673cd98d12a1046d38b593804d914"
	I0316 17:47:58.140301  481631 logs.go:123] Gathering logs for kindnet [22beb4846f86e0b94f967a82643633bda14c92a967549166bf63c77fcd3a5673] ...
	I0316 17:47:58.140388  481631 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 22beb4846f86e0b94f967a82643633bda14c92a967549166bf63c77fcd3a5673"
	I0316 17:47:58.195782  481631 logs.go:123] Gathering logs for storage-provisioner [747498059d66bf6a35719a49e025168cdec4e997bd41ff614c40cd4518774adb] ...
	I0316 17:47:58.195811  481631 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 747498059d66bf6a35719a49e025168cdec4e997bd41ff614c40cd4518774adb"
	I0316 17:47:58.246786  481631 logs.go:123] Gathering logs for describe nodes ...
	I0316 17:47:58.246854  481631 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0316 17:47:58.533319  481631 logs.go:123] Gathering logs for kube-proxy [53266df997beffb4f7bfa6609d282d4f498bcdb315a85073da81dd740c85139f] ...
	I0316 17:47:58.533350  481631 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 53266df997beffb4f7bfa6609d282d4f498bcdb315a85073da81dd740c85139f"
	I0316 17:47:58.581039  481631 logs.go:123] Gathering logs for kube-proxy [ccbf82b14ebc82618a0db0f8cce371995c37bb4d2cd2b873a46ac53578fbec9b] ...
	I0316 17:47:58.581076  481631 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ccbf82b14ebc82618a0db0f8cce371995c37bb4d2cd2b873a46ac53578fbec9b"
	I0316 17:47:58.646782  481631 logs.go:123] Gathering logs for kube-controller-manager [c5661cb115eddb01bce4d502126d119b47c1c22da24660a5c5d57202fad6e10e] ...
	I0316 17:47:58.646811  481631 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c5661cb115eddb01bce4d502126d119b47c1c22da24660a5c5d57202fad6e10e"
	I0316 17:47:58.728590  481631 logs.go:123] Gathering logs for storage-provisioner [c5196a521ea11d8df3329b51f670d2873b2e489ba1e6d7bad59e4d1a58567aaf] ...
	I0316 17:47:58.728623  481631 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c5196a521ea11d8df3329b51f670d2873b2e489ba1e6d7bad59e4d1a58567aaf"
	I0316 17:47:58.804089  481631 logs.go:123] Gathering logs for container status ...
	I0316 17:47:58.804118  481631 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 17:47:58.878284  481631 logs.go:123] Gathering logs for kubelet ...
	I0316 17:47:58.878312  481631 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0316 17:47:58.937737  481631 logs.go:138] Found kubelet problem: Mar 16 17:42:26 old-k8s-version-746380 kubelet[660]: E0316 17:42:26.973155     660 reflector.go:138] object-"kube-system"/"coredns-token-xmxt5": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-xmxt5" is forbidden: User "system:node:old-k8s-version-746380" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-746380' and this object
	W0316 17:47:58.937979  481631 logs.go:138] Found kubelet problem: Mar 16 17:42:26 old-k8s-version-746380 kubelet[660]: E0316 17:42:26.973450     660 reflector.go:138] object-"kube-system"/"storage-provisioner-token-jg8md": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-jg8md" is forbidden: User "system:node:old-k8s-version-746380" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-746380' and this object
	W0316 17:47:58.938253  481631 logs.go:138] Found kubelet problem: Mar 16 17:42:26 old-k8s-version-746380 kubelet[660]: E0316 17:42:26.973508     660 reflector.go:138] object-"default"/"default-token-v8zz6": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-v8zz6" is forbidden: User "system:node:old-k8s-version-746380" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-746380' and this object
	W0316 17:47:58.938518  481631 logs.go:138] Found kubelet problem: Mar 16 17:42:26 old-k8s-version-746380 kubelet[660]: E0316 17:42:26.973560     660 reflector.go:138] object-"kube-system"/"metrics-server-token-dlrz5": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-dlrz5" is forbidden: User "system:node:old-k8s-version-746380" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-746380' and this object
	W0316 17:47:58.938727  481631 logs.go:138] Found kubelet problem: Mar 16 17:42:26 old-k8s-version-746380 kubelet[660]: E0316 17:42:26.973614     660 reflector.go:138] object-"kube-system"/"kindnet-token-79qtr": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-79qtr" is forbidden: User "system:node:old-k8s-version-746380" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-746380' and this object
	W0316 17:47:58.938945  481631 logs.go:138] Found kubelet problem: Mar 16 17:42:26 old-k8s-version-746380 kubelet[660]: E0316 17:42:26.973660     660 reflector.go:138] object-"kube-system"/"kube-proxy-token-nlbsf": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-nlbsf" is forbidden: User "system:node:old-k8s-version-746380" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-746380' and this object
	W0316 17:47:58.939153  481631 logs.go:138] Found kubelet problem: Mar 16 17:42:26 old-k8s-version-746380 kubelet[660]: E0316 17:42:26.973717     660 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-746380" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-746380' and this object
	W0316 17:47:58.939370  481631 logs.go:138] Found kubelet problem: Mar 16 17:42:26 old-k8s-version-746380 kubelet[660]: E0316 17:42:26.973769     660 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-746380" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-746380' and this object
	W0316 17:47:58.949987  481631 logs.go:138] Found kubelet problem: Mar 16 17:42:28 old-k8s-version-746380 kubelet[660]: E0316 17:42:28.869550     660 pod_workers.go:191] Error syncing pod cd17aa57-4a12-49cf-9cb2-d519126a78d2 ("metrics-server-9975d5f86-s65lt_kube-system(cd17aa57-4a12-49cf-9cb2-d519126a78d2)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0316 17:47:58.951000  481631 logs.go:138] Found kubelet problem: Mar 16 17:42:29 old-k8s-version-746380 kubelet[660]: E0316 17:42:29.787727     660 pod_workers.go:191] Error syncing pod cd17aa57-4a12-49cf-9cb2-d519126a78d2 ("metrics-server-9975d5f86-s65lt_kube-system(cd17aa57-4a12-49cf-9cb2-d519126a78d2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0316 17:47:58.953849  481631 logs.go:138] Found kubelet problem: Mar 16 17:42:43 old-k8s-version-746380 kubelet[660]: E0316 17:42:43.570004     660 pod_workers.go:191] Error syncing pod cd17aa57-4a12-49cf-9cb2-d519126a78d2 ("metrics-server-9975d5f86-s65lt_kube-system(cd17aa57-4a12-49cf-9cb2-d519126a78d2)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0316 17:47:58.955949  481631 logs.go:138] Found kubelet problem: Mar 16 17:42:52 old-k8s-version-746380 kubelet[660]: E0316 17:42:52.887504     660 pod_workers.go:191] Error syncing pod 44cccf23-c5b3-46c9-b387-cc05351e79ed ("dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"
	W0316 17:47:58.956321  481631 logs.go:138] Found kubelet problem: Mar 16 17:42:53 old-k8s-version-746380 kubelet[660]: E0316 17:42:53.890811     660 pod_workers.go:191] Error syncing pod 44cccf23-c5b3-46c9-b387-cc05351e79ed ("dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"
	W0316 17:47:58.956858  481631 logs.go:138] Found kubelet problem: Mar 16 17:42:56 old-k8s-version-746380 kubelet[660]: E0316 17:42:56.557270     660 pod_workers.go:191] Error syncing pod cd17aa57-4a12-49cf-9cb2-d519126a78d2 ("metrics-server-9975d5f86-s65lt_kube-system(cd17aa57-4a12-49cf-9cb2-d519126a78d2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0316 17:47:58.957184  481631 logs.go:138] Found kubelet problem: Mar 16 17:42:58 old-k8s-version-746380 kubelet[660]: E0316 17:42:58.242491     660 pod_workers.go:191] Error syncing pod 44cccf23-c5b3-46c9-b387-cc05351e79ed ("dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"
	W0316 17:47:58.957618  481631 logs.go:138] Found kubelet problem: Mar 16 17:42:59 old-k8s-version-746380 kubelet[660]: E0316 17:42:59.906687     660 pod_workers.go:191] Error syncing pod 5f7e0b89-084e-48fe-9574-508fd681797d ("storage-provisioner_kube-system(5f7e0b89-084e-48fe-9574-508fd681797d)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(5f7e0b89-084e-48fe-9574-508fd681797d)"
	W0316 17:47:58.960398  481631 logs.go:138] Found kubelet problem: Mar 16 17:43:10 old-k8s-version-746380 kubelet[660]: E0316 17:43:10.576290     660 pod_workers.go:191] Error syncing pod cd17aa57-4a12-49cf-9cb2-d519126a78d2 ("metrics-server-9975d5f86-s65lt_kube-system(cd17aa57-4a12-49cf-9cb2-d519126a78d2)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0316 17:47:58.961045  481631 logs.go:138] Found kubelet problem: Mar 16 17:43:11 old-k8s-version-746380 kubelet[660]: E0316 17:43:11.935923     660 pod_workers.go:191] Error syncing pod 44cccf23-c5b3-46c9-b387-cc05351e79ed ("dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"
	W0316 17:47:58.961498  481631 logs.go:138] Found kubelet problem: Mar 16 17:43:18 old-k8s-version-746380 kubelet[660]: E0316 17:43:18.243039     660 pod_workers.go:191] Error syncing pod 44cccf23-c5b3-46c9-b387-cc05351e79ed ("dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"
	W0316 17:47:58.961677  481631 logs.go:138] Found kubelet problem: Mar 16 17:43:24 old-k8s-version-746380 kubelet[660]: E0316 17:43:24.557141     660 pod_workers.go:191] Error syncing pod cd17aa57-4a12-49cf-9cb2-d519126a78d2 ("metrics-server-9975d5f86-s65lt_kube-system(cd17aa57-4a12-49cf-9cb2-d519126a78d2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0316 17:47:58.961999  481631 logs.go:138] Found kubelet problem: Mar 16 17:43:30 old-k8s-version-746380 kubelet[660]: E0316 17:43:30.556868     660 pod_workers.go:191] Error syncing pod 44cccf23-c5b3-46c9-b387-cc05351e79ed ("dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"
	W0316 17:47:58.962183  481631 logs.go:138] Found kubelet problem: Mar 16 17:43:36 old-k8s-version-746380 kubelet[660]: E0316 17:43:36.556932     660 pod_workers.go:191] Error syncing pod cd17aa57-4a12-49cf-9cb2-d519126a78d2 ("metrics-server-9975d5f86-s65lt_kube-system(cd17aa57-4a12-49cf-9cb2-d519126a78d2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0316 17:47:58.962768  481631 logs.go:138] Found kubelet problem: Mar 16 17:43:42 old-k8s-version-746380 kubelet[660]: E0316 17:43:42.043309     660 pod_workers.go:191] Error syncing pod 44cccf23-c5b3-46c9-b387-cc05351e79ed ("dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"
	W0316 17:47:58.963089  481631 logs.go:138] Found kubelet problem: Mar 16 17:43:48 old-k8s-version-746380 kubelet[660]: E0316 17:43:48.242956     660 pod_workers.go:191] Error syncing pod 44cccf23-c5b3-46c9-b387-cc05351e79ed ("dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"
	W0316 17:47:58.965567  481631 logs.go:138] Found kubelet problem: Mar 16 17:43:51 old-k8s-version-746380 kubelet[660]: E0316 17:43:51.580010     660 pod_workers.go:191] Error syncing pod cd17aa57-4a12-49cf-9cb2-d519126a78d2 ("metrics-server-9975d5f86-s65lt_kube-system(cd17aa57-4a12-49cf-9cb2-d519126a78d2)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0316 17:47:58.965917  481631 logs.go:138] Found kubelet problem: Mar 16 17:43:59 old-k8s-version-746380 kubelet[660]: E0316 17:43:59.556848     660 pod_workers.go:191] Error syncing pod 44cccf23-c5b3-46c9-b387-cc05351e79ed ("dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"
	W0316 17:47:58.966130  481631 logs.go:138] Found kubelet problem: Mar 16 17:44:06 old-k8s-version-746380 kubelet[660]: E0316 17:44:06.561902     660 pod_workers.go:191] Error syncing pod cd17aa57-4a12-49cf-9cb2-d519126a78d2 ("metrics-server-9975d5f86-s65lt_kube-system(cd17aa57-4a12-49cf-9cb2-d519126a78d2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0316 17:47:58.966483  481631 logs.go:138] Found kubelet problem: Mar 16 17:44:11 old-k8s-version-746380 kubelet[660]: E0316 17:44:11.559395     660 pod_workers.go:191] Error syncing pod 44cccf23-c5b3-46c9-b387-cc05351e79ed ("dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"
	W0316 17:47:58.966688  481631 logs.go:138] Found kubelet problem: Mar 16 17:44:18 old-k8s-version-746380 kubelet[660]: E0316 17:44:18.557203     660 pod_workers.go:191] Error syncing pod cd17aa57-4a12-49cf-9cb2-d519126a78d2 ("metrics-server-9975d5f86-s65lt_kube-system(cd17aa57-4a12-49cf-9cb2-d519126a78d2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0316 17:47:58.967294  481631 logs.go:138] Found kubelet problem: Mar 16 17:44:27 old-k8s-version-746380 kubelet[660]: E0316 17:44:27.155323     660 pod_workers.go:191] Error syncing pod 44cccf23-c5b3-46c9-b387-cc05351e79ed ("dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"
	W0316 17:47:58.967653  481631 logs.go:138] Found kubelet problem: Mar 16 17:44:28 old-k8s-version-746380 kubelet[660]: E0316 17:44:28.242526     660 pod_workers.go:191] Error syncing pod 44cccf23-c5b3-46c9-b387-cc05351e79ed ("dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"
	W0316 17:47:58.967940  481631 logs.go:138] Found kubelet problem: Mar 16 17:44:29 old-k8s-version-746380 kubelet[660]: E0316 17:44:29.560438     660 pod_workers.go:191] Error syncing pod cd17aa57-4a12-49cf-9cb2-d519126a78d2 ("metrics-server-9975d5f86-s65lt_kube-system(cd17aa57-4a12-49cf-9cb2-d519126a78d2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0316 17:47:58.968151  481631 logs.go:138] Found kubelet problem: Mar 16 17:44:42 old-k8s-version-746380 kubelet[660]: E0316 17:44:42.557018     660 pod_workers.go:191] Error syncing pod cd17aa57-4a12-49cf-9cb2-d519126a78d2 ("metrics-server-9975d5f86-s65lt_kube-system(cd17aa57-4a12-49cf-9cb2-d519126a78d2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0316 17:47:58.968507  481631 logs.go:138] Found kubelet problem: Mar 16 17:44:43 old-k8s-version-746380 kubelet[660]: E0316 17:44:43.556890     660 pod_workers.go:191] Error syncing pod 44cccf23-c5b3-46c9-b387-cc05351e79ed ("dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"
	W0316 17:47:58.968713  481631 logs.go:138] Found kubelet problem: Mar 16 17:44:53 old-k8s-version-746380 kubelet[660]: E0316 17:44:53.559894     660 pod_workers.go:191] Error syncing pod cd17aa57-4a12-49cf-9cb2-d519126a78d2 ("metrics-server-9975d5f86-s65lt_kube-system(cd17aa57-4a12-49cf-9cb2-d519126a78d2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0316 17:47:58.969060  481631 logs.go:138] Found kubelet problem: Mar 16 17:44:54 old-k8s-version-746380 kubelet[660]: E0316 17:44:54.556687     660 pod_workers.go:191] Error syncing pod 44cccf23-c5b3-46c9-b387-cc05351e79ed ("dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"
	W0316 17:47:58.969408  481631 logs.go:138] Found kubelet problem: Mar 16 17:45:06 old-k8s-version-746380 kubelet[660]: E0316 17:45:06.556647     660 pod_workers.go:191] Error syncing pod 44cccf23-c5b3-46c9-b387-cc05351e79ed ("dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"
	W0316 17:47:58.969612  481631 logs.go:138] Found kubelet problem: Mar 16 17:45:07 old-k8s-version-746380 kubelet[660]: E0316 17:45:07.558766     660 pod_workers.go:191] Error syncing pod cd17aa57-4a12-49cf-9cb2-d519126a78d2 ("metrics-server-9975d5f86-s65lt_kube-system(cd17aa57-4a12-49cf-9cb2-d519126a78d2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0316 17:47:58.969957  481631 logs.go:138] Found kubelet problem: Mar 16 17:45:21 old-k8s-version-746380 kubelet[660]: E0316 17:45:21.557302     660 pod_workers.go:191] Error syncing pod 44cccf23-c5b3-46c9-b387-cc05351e79ed ("dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"
	W0316 17:47:58.972422  481631 logs.go:138] Found kubelet problem: Mar 16 17:45:22 old-k8s-version-746380 kubelet[660]: E0316 17:45:22.564132     660 pod_workers.go:191] Error syncing pod cd17aa57-4a12-49cf-9cb2-d519126a78d2 ("metrics-server-9975d5f86-s65lt_kube-system(cd17aa57-4a12-49cf-9cb2-d519126a78d2)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0316 17:47:58.972631  481631 logs.go:138] Found kubelet problem: Mar 16 17:45:33 old-k8s-version-746380 kubelet[660]: E0316 17:45:33.558085     660 pod_workers.go:191] Error syncing pod cd17aa57-4a12-49cf-9cb2-d519126a78d2 ("metrics-server-9975d5f86-s65lt_kube-system(cd17aa57-4a12-49cf-9cb2-d519126a78d2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0316 17:47:58.973000  481631 logs.go:138] Found kubelet problem: Mar 16 17:45:34 old-k8s-version-746380 kubelet[660]: E0316 17:45:34.556860     660 pod_workers.go:191] Error syncing pod 44cccf23-c5b3-46c9-b387-cc05351e79ed ("dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"
	W0316 17:47:58.973352  481631 logs.go:138] Found kubelet problem: Mar 16 17:45:45 old-k8s-version-746380 kubelet[660]: E0316 17:45:45.557389     660 pod_workers.go:191] Error syncing pod 44cccf23-c5b3-46c9-b387-cc05351e79ed ("dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"
	W0316 17:47:58.973640  481631 logs.go:138] Found kubelet problem: Mar 16 17:45:46 old-k8s-version-746380 kubelet[660]: E0316 17:45:46.567889     660 pod_workers.go:191] Error syncing pod cd17aa57-4a12-49cf-9cb2-d519126a78d2 ("metrics-server-9975d5f86-s65lt_kube-system(cd17aa57-4a12-49cf-9cb2-d519126a78d2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0316 17:47:58.974263  481631 logs.go:138] Found kubelet problem: Mar 16 17:45:57 old-k8s-version-746380 kubelet[660]: E0316 17:45:57.379216     660 pod_workers.go:191] Error syncing pod 44cccf23-c5b3-46c9-b387-cc05351e79ed ("dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"
	W0316 17:47:58.974623  481631 logs.go:138] Found kubelet problem: Mar 16 17:45:58 old-k8s-version-746380 kubelet[660]: E0316 17:45:58.382607     660 pod_workers.go:191] Error syncing pod 44cccf23-c5b3-46c9-b387-cc05351e79ed ("dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"
	W0316 17:47:58.974831  481631 logs.go:138] Found kubelet problem: Mar 16 17:45:58 old-k8s-version-746380 kubelet[660]: E0316 17:45:58.557215     660 pod_workers.go:191] Error syncing pod cd17aa57-4a12-49cf-9cb2-d519126a78d2 ("metrics-server-9975d5f86-s65lt_kube-system(cd17aa57-4a12-49cf-9cb2-d519126a78d2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0316 17:47:58.975036  481631 logs.go:138] Found kubelet problem: Mar 16 17:46:10 old-k8s-version-746380 kubelet[660]: E0316 17:46:10.556961     660 pod_workers.go:191] Error syncing pod cd17aa57-4a12-49cf-9cb2-d519126a78d2 ("metrics-server-9975d5f86-s65lt_kube-system(cd17aa57-4a12-49cf-9cb2-d519126a78d2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0316 17:47:58.975387  481631 logs.go:138] Found kubelet problem: Mar 16 17:46:13 old-k8s-version-746380 kubelet[660]: E0316 17:46:13.557158     660 pod_workers.go:191] Error syncing pod 44cccf23-c5b3-46c9-b387-cc05351e79ed ("dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"
	W0316 17:47:58.975590  481631 logs.go:138] Found kubelet problem: Mar 16 17:46:21 old-k8s-version-746380 kubelet[660]: E0316 17:46:21.557267     660 pod_workers.go:191] Error syncing pod cd17aa57-4a12-49cf-9cb2-d519126a78d2 ("metrics-server-9975d5f86-s65lt_kube-system(cd17aa57-4a12-49cf-9cb2-d519126a78d2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0316 17:47:58.975952  481631 logs.go:138] Found kubelet problem: Mar 16 17:46:25 old-k8s-version-746380 kubelet[660]: E0316 17:46:25.557644     660 pod_workers.go:191] Error syncing pod 44cccf23-c5b3-46c9-b387-cc05351e79ed ("dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"
	W0316 17:47:58.976163  481631 logs.go:138] Found kubelet problem: Mar 16 17:46:34 old-k8s-version-746380 kubelet[660]: E0316 17:46:34.556981     660 pod_workers.go:191] Error syncing pod cd17aa57-4a12-49cf-9cb2-d519126a78d2 ("metrics-server-9975d5f86-s65lt_kube-system(cd17aa57-4a12-49cf-9cb2-d519126a78d2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0316 17:47:58.976517  481631 logs.go:138] Found kubelet problem: Mar 16 17:46:39 old-k8s-version-746380 kubelet[660]: E0316 17:46:39.559649     660 pod_workers.go:191] Error syncing pod 44cccf23-c5b3-46c9-b387-cc05351e79ed ("dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"
	W0316 17:47:58.976723  481631 logs.go:138] Found kubelet problem: Mar 16 17:46:49 old-k8s-version-746380 kubelet[660]: E0316 17:46:49.557008     660 pod_workers.go:191] Error syncing pod cd17aa57-4a12-49cf-9cb2-d519126a78d2 ("metrics-server-9975d5f86-s65lt_kube-system(cd17aa57-4a12-49cf-9cb2-d519126a78d2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0316 17:47:58.977046  481631 logs.go:138] Found kubelet problem: Mar 16 17:46:53 old-k8s-version-746380 kubelet[660]: E0316 17:46:53.557338     660 pod_workers.go:191] Error syncing pod 44cccf23-c5b3-46c9-b387-cc05351e79ed ("dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"
	W0316 17:47:58.977225  481631 logs.go:138] Found kubelet problem: Mar 16 17:47:04 old-k8s-version-746380 kubelet[660]: E0316 17:47:04.557220     660 pod_workers.go:191] Error syncing pod cd17aa57-4a12-49cf-9cb2-d519126a78d2 ("metrics-server-9975d5f86-s65lt_kube-system(cd17aa57-4a12-49cf-9cb2-d519126a78d2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0316 17:47:58.977547  481631 logs.go:138] Found kubelet problem: Mar 16 17:47:05 old-k8s-version-746380 kubelet[660]: E0316 17:47:05.557240     660 pod_workers.go:191] Error syncing pod 44cccf23-c5b3-46c9-b387-cc05351e79ed ("dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"
	W0316 17:47:58.977868  481631 logs.go:138] Found kubelet problem: Mar 16 17:47:16 old-k8s-version-746380 kubelet[660]: E0316 17:47:16.557143     660 pod_workers.go:191] Error syncing pod 44cccf23-c5b3-46c9-b387-cc05351e79ed ("dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"
	W0316 17:47:58.978047  481631 logs.go:138] Found kubelet problem: Mar 16 17:47:16 old-k8s-version-746380 kubelet[660]: E0316 17:47:16.557232     660 pod_workers.go:191] Error syncing pod cd17aa57-4a12-49cf-9cb2-d519126a78d2 ("metrics-server-9975d5f86-s65lt_kube-system(cd17aa57-4a12-49cf-9cb2-d519126a78d2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0316 17:47:58.978383  481631 logs.go:138] Found kubelet problem: Mar 16 17:47:27 old-k8s-version-746380 kubelet[660]: E0316 17:47:27.560424     660 pod_workers.go:191] Error syncing pod 44cccf23-c5b3-46c9-b387-cc05351e79ed ("dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"
	W0316 17:47:58.978563  481631 logs.go:138] Found kubelet problem: Mar 16 17:47:27 old-k8s-version-746380 kubelet[660]: E0316 17:47:27.561727     660 pod_workers.go:191] Error syncing pod cd17aa57-4a12-49cf-9cb2-d519126a78d2 ("metrics-server-9975d5f86-s65lt_kube-system(cd17aa57-4a12-49cf-9cb2-d519126a78d2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0316 17:47:58.978884  481631 logs.go:138] Found kubelet problem: Mar 16 17:47:39 old-k8s-version-746380 kubelet[660]: E0316 17:47:39.565385     660 pod_workers.go:191] Error syncing pod 44cccf23-c5b3-46c9-b387-cc05351e79ed ("dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"
	W0316 17:47:58.979063  481631 logs.go:138] Found kubelet problem: Mar 16 17:47:42 old-k8s-version-746380 kubelet[660]: E0316 17:47:42.561108     660 pod_workers.go:191] Error syncing pod cd17aa57-4a12-49cf-9cb2-d519126a78d2 ("metrics-server-9975d5f86-s65lt_kube-system(cd17aa57-4a12-49cf-9cb2-d519126a78d2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0316 17:47:58.979382  481631 logs.go:138] Found kubelet problem: Mar 16 17:47:50 old-k8s-version-746380 kubelet[660]: E0316 17:47:50.556652     660 pod_workers.go:191] Error syncing pod 44cccf23-c5b3-46c9-b387-cc05351e79ed ("dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"
	W0316 17:47:58.979563  481631 logs.go:138] Found kubelet problem: Mar 16 17:47:56 old-k8s-version-746380 kubelet[660]: E0316 17:47:56.557033     660 pod_workers.go:191] Error syncing pod cd17aa57-4a12-49cf-9cb2-d519126a78d2 ("metrics-server-9975d5f86-s65lt_kube-system(cd17aa57-4a12-49cf-9cb2-d519126a78d2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0316 17:47:58.979570  481631 logs.go:123] Gathering logs for kube-apiserver [f6a137e8a3b1485dd10f52919de9c0fef41fc33d23e13d15ecd70b4ee918c6d5] ...
	I0316 17:47:58.979583  481631 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6a137e8a3b1485dd10f52919de9c0fef41fc33d23e13d15ecd70b4ee918c6d5"
	I0316 17:47:59.125053  481631 out.go:304] Setting ErrFile to fd 2...
	I0316 17:47:59.125091  481631 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0316 17:47:59.125167  481631 out.go:239] X Problems detected in kubelet:
	W0316 17:47:59.125176  481631 out.go:239]   Mar 16 17:47:27 old-k8s-version-746380 kubelet[660]: E0316 17:47:27.561727     660 pod_workers.go:191] Error syncing pod cd17aa57-4a12-49cf-9cb2-d519126a78d2 ("metrics-server-9975d5f86-s65lt_kube-system(cd17aa57-4a12-49cf-9cb2-d519126a78d2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0316 17:47:59.125185  481631 out.go:239]   Mar 16 17:47:39 old-k8s-version-746380 kubelet[660]: E0316 17:47:39.565385     660 pod_workers.go:191] Error syncing pod 44cccf23-c5b3-46c9-b387-cc05351e79ed ("dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"
	W0316 17:47:59.125193  481631 out.go:239]   Mar 16 17:47:42 old-k8s-version-746380 kubelet[660]: E0316 17:47:42.561108     660 pod_workers.go:191] Error syncing pod cd17aa57-4a12-49cf-9cb2-d519126a78d2 ("metrics-server-9975d5f86-s65lt_kube-system(cd17aa57-4a12-49cf-9cb2-d519126a78d2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0316 17:47:59.125239  481631 out.go:239]   Mar 16 17:47:50 old-k8s-version-746380 kubelet[660]: E0316 17:47:50.556652     660 pod_workers.go:191] Error syncing pod 44cccf23-c5b3-46c9-b387-cc05351e79ed ("dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"
	W0316 17:47:59.125249  481631 out.go:239]   Mar 16 17:47:56 old-k8s-version-746380 kubelet[660]: E0316 17:47:56.557033     660 pod_workers.go:191] Error syncing pod cd17aa57-4a12-49cf-9cb2-d519126a78d2 ("metrics-server-9975d5f86-s65lt_kube-system(cd17aa57-4a12-49cf-9cb2-d519126a78d2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0316 17:47:59.125261  481631 out.go:304] Setting ErrFile to fd 2...
	I0316 17:47:59.125272  481631 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0316 17:47:56.977041  491570 cli_runner.go:164] Run: docker network inspect embed-certs-126148 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0316 17:47:56.994512  491570 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0316 17:47:56.999303  491570 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0316 17:47:57.019696  491570 kubeadm.go:877] updating cluster {Name:embed-certs-126148 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-126148 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0316 17:47:57.019827  491570 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0316 17:47:57.019915  491570 ssh_runner.go:195] Run: sudo crictl images --output json
	I0316 17:47:57.077558  491570 containerd.go:612] all images are preloaded for containerd runtime.
	I0316 17:47:57.077580  491570 containerd.go:519] Images already preloaded, skipping extraction
	I0316 17:47:57.077649  491570 ssh_runner.go:195] Run: sudo crictl images --output json
	I0316 17:47:57.133213  491570 containerd.go:612] all images are preloaded for containerd runtime.
	I0316 17:47:57.133233  491570 cache_images.go:84] Images are preloaded, skipping loading
	I0316 17:47:57.133241  491570 kubeadm.go:928] updating node { 192.168.85.2 8443 v1.28.4 containerd true true} ...
	I0316 17:47:57.133335  491570 kubeadm.go:940] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-126148 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-126148 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0316 17:47:57.133403  491570 ssh_runner.go:195] Run: sudo crictl info
	I0316 17:47:57.187761  491570 cni.go:84] Creating CNI manager for ""
	I0316 17:47:57.187833  491570 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0316 17:47:57.187859  491570 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0316 17:47:57.187902  491570 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-126148 NodeName:embed-certs-126148 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0316 17:47:57.188056  491570 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "embed-certs-126148"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0316 17:47:57.188146  491570 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0316 17:47:57.199396  491570 binaries.go:44] Found k8s binaries, skipping transfer
	I0316 17:47:57.199461  491570 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0316 17:47:57.209401  491570 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I0316 17:47:57.231714  491570 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0316 17:47:57.253246  491570 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0316 17:47:57.274189  491570 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0316 17:47:57.278452  491570 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0316 17:47:57.291414  491570 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0316 17:47:57.407058  491570 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0316 17:47:57.434428  491570 certs.go:68] Setting up /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/embed-certs-126148 for IP: 192.168.85.2
	I0316 17:47:57.434484  491570 certs.go:194] generating shared ca certs ...
	I0316 17:47:57.434514  491570 certs.go:226] acquiring lock for ca certs: {Name:mk6d455ecce74ad164a5c9d511b938033d09479f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 17:47:57.434658  491570 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18277-280225/.minikube/ca.key
	I0316 17:47:57.434749  491570 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18277-280225/.minikube/proxy-client-ca.key
	I0316 17:47:57.434784  491570 certs.go:256] generating profile certs ...
	I0316 17:47:57.434860  491570 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/embed-certs-126148/client.key
	I0316 17:47:57.434892  491570 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/embed-certs-126148/client.crt with IP's: []
	I0316 17:47:58.239924  491570 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/embed-certs-126148/client.crt ...
	I0316 17:47:58.239958  491570 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/embed-certs-126148/client.crt: {Name:mk8c577b3361820b2fc7bb62eaff66babd82ad7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 17:47:58.240769  491570 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/embed-certs-126148/client.key ...
	I0316 17:47:58.240789  491570 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/embed-certs-126148/client.key: {Name:mke32d2a4481f6429eb296745fccb71bd0d6bdf3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 17:47:58.240909  491570 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/embed-certs-126148/apiserver.key.6b08b102
	I0316 17:47:58.240930  491570 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/embed-certs-126148/apiserver.crt.6b08b102 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I0316 17:47:58.581474  491570 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/embed-certs-126148/apiserver.crt.6b08b102 ...
	I0316 17:47:58.581518  491570 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/embed-certs-126148/apiserver.crt.6b08b102: {Name:mkbff8a1083c4ab9c0b19af81274692e40b4f1ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 17:47:58.581701  491570 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/embed-certs-126148/apiserver.key.6b08b102 ...
	I0316 17:47:58.581739  491570 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/embed-certs-126148/apiserver.key.6b08b102: {Name:mke59123d5a6aaf7b087bbe74858f403d08edb3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 17:47:58.581845  491570 certs.go:381] copying /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/embed-certs-126148/apiserver.crt.6b08b102 -> /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/embed-certs-126148/apiserver.crt
	I0316 17:47:58.582039  491570 certs.go:385] copying /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/embed-certs-126148/apiserver.key.6b08b102 -> /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/embed-certs-126148/apiserver.key
	I0316 17:47:58.582140  491570 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/embed-certs-126148/proxy-client.key
	I0316 17:47:58.582176  491570 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/embed-certs-126148/proxy-client.crt with IP's: []
	I0316 17:47:59.578910  491570 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/embed-certs-126148/proxy-client.crt ...
	I0316 17:47:59.578987  491570 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/embed-certs-126148/proxy-client.crt: {Name:mk815d2cad2be4f2c38ddacc512c9fd92f0ae508 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 17:47:59.579199  491570 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/embed-certs-126148/proxy-client.key ...
	I0316 17:47:59.579253  491570 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/embed-certs-126148/proxy-client.key: {Name:mkd24c9dc6e99a7f1d1781091995932a1a11d177 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 17:47:59.579492  491570 certs.go:484] found cert: /home/jenkins/minikube-integration/18277-280225/.minikube/certs/285633.pem (1338 bytes)
	W0316 17:47:59.579591  491570 certs.go:480] ignoring /home/jenkins/minikube-integration/18277-280225/.minikube/certs/285633_empty.pem, impossibly tiny 0 bytes
	I0316 17:47:59.579656  491570 certs.go:484] found cert: /home/jenkins/minikube-integration/18277-280225/.minikube/certs/ca-key.pem (1679 bytes)
	I0316 17:47:59.579716  491570 certs.go:484] found cert: /home/jenkins/minikube-integration/18277-280225/.minikube/certs/ca.pem (1078 bytes)
	I0316 17:47:59.579779  491570 certs.go:484] found cert: /home/jenkins/minikube-integration/18277-280225/.minikube/certs/cert.pem (1123 bytes)
	I0316 17:47:59.579840  491570 certs.go:484] found cert: /home/jenkins/minikube-integration/18277-280225/.minikube/certs/key.pem (1675 bytes)
	I0316 17:47:59.579909  491570 certs.go:484] found cert: /home/jenkins/minikube-integration/18277-280225/.minikube/files/etc/ssl/certs/2856332.pem (1708 bytes)
	I0316 17:47:59.580600  491570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18277-280225/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0316 17:47:59.608231  491570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18277-280225/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0316 17:47:59.642962  491570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18277-280225/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0316 17:47:59.669169  491570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18277-280225/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0316 17:47:59.699661  491570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/embed-certs-126148/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0316 17:47:59.727238  491570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/embed-certs-126148/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0316 17:47:59.753907  491570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/embed-certs-126148/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0316 17:47:59.782256  491570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/embed-certs-126148/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0316 17:47:59.811894  491570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18277-280225/.minikube/files/etc/ssl/certs/2856332.pem --> /usr/share/ca-certificates/2856332.pem (1708 bytes)
	I0316 17:47:59.838601  491570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18277-280225/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0316 17:47:59.870341  491570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18277-280225/.minikube/certs/285633.pem --> /usr/share/ca-certificates/285633.pem (1338 bytes)
	I0316 17:47:59.896056  491570 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0316 17:47:59.915413  491570 ssh_runner.go:195] Run: openssl version
	I0316 17:47:59.921343  491570 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2856332.pem && ln -fs /usr/share/ca-certificates/2856332.pem /etc/ssl/certs/2856332.pem"
	I0316 17:47:59.932117  491570 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2856332.pem
	I0316 17:47:59.935935  491570 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 16 17:01 /usr/share/ca-certificates/2856332.pem
	I0316 17:47:59.936002  491570 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2856332.pem
	I0316 17:47:59.943163  491570 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2856332.pem /etc/ssl/certs/3ec20f2e.0"
	I0316 17:47:59.953505  491570 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0316 17:47:59.963236  491570 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0316 17:47:59.966960  491570 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 16 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0316 17:47:59.967027  491570 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0316 17:47:59.974780  491570 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0316 17:47:59.984658  491570 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/285633.pem && ln -fs /usr/share/ca-certificates/285633.pem /etc/ssl/certs/285633.pem"
	I0316 17:47:59.993990  491570 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/285633.pem
	I0316 17:47:59.997467  491570 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 16 17:01 /usr/share/ca-certificates/285633.pem
	I0316 17:47:59.997535  491570 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/285633.pem
	I0316 17:48:00.009615  491570 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/285633.pem /etc/ssl/certs/51391683.0"
	I0316 17:48:00.039761  491570 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0316 17:48:00.059899  491570 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0316 17:48:00.059989  491570 kubeadm.go:391] StartCluster: {Name:embed-certs-126148 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-126148 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0316 17:48:00.060089  491570 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0316 17:48:00.060176  491570 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0316 17:48:00.206620  491570 cri.go:89] found id: ""
	I0316 17:48:00.206805  491570 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0316 17:48:00.273976  491570 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0316 17:48:00.347521  491570 kubeadm.go:213] ignoring SystemVerification for kubeadm because of docker driver
	I0316 17:48:00.347859  491570 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0316 17:48:00.373618  491570 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0316 17:48:00.373642  491570 kubeadm.go:156] found existing configuration files:
	
	I0316 17:48:00.373709  491570 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0316 17:48:00.391788  491570 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0316 17:48:00.391885  491570 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0316 17:48:00.422215  491570 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0316 17:48:00.450654  491570 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0316 17:48:00.450736  491570 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0316 17:48:00.466479  491570 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0316 17:48:00.479511  491570 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0316 17:48:00.479719  491570 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0316 17:48:00.490869  491570 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0316 17:48:00.503307  491570 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0316 17:48:00.503479  491570 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
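The four grep-then-rm steps above are a stale-kubeconfig sweep: every file under /etc/kubernetes that does not already point at https://control-plane.minikube.internal:8443 is removed before kubeadm init runs. A minimal shell equivalent, assuming the same paths and endpoint, is:

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # keep the file only if it already targets the expected control-plane endpoint
      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"
    done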
	I0316 17:48:00.515242  491570 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0316 17:48:00.585618  491570 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0316 17:48:00.586029  491570 kubeadm.go:309] [preflight] Running pre-flight checks
	I0316 17:48:00.645904  491570 kubeadm.go:309] [preflight] The system verification failed. Printing the output from the verification:
	I0316 17:48:00.645978  491570 kubeadm.go:309] KERNEL_VERSION: 5.15.0-1055-aws
	I0316 17:48:00.646018  491570 kubeadm.go:309] OS: Linux
	I0316 17:48:00.646073  491570 kubeadm.go:309] CGROUPS_CPU: enabled
	I0316 17:48:00.646124  491570 kubeadm.go:309] CGROUPS_CPUACCT: enabled
	I0316 17:48:00.646173  491570 kubeadm.go:309] CGROUPS_CPUSET: enabled
	I0316 17:48:00.646225  491570 kubeadm.go:309] CGROUPS_DEVICES: enabled
	I0316 17:48:00.646283  491570 kubeadm.go:309] CGROUPS_FREEZER: enabled
	I0316 17:48:00.646336  491570 kubeadm.go:309] CGROUPS_MEMORY: enabled
	I0316 17:48:00.646386  491570 kubeadm.go:309] CGROUPS_PIDS: enabled
	I0316 17:48:00.646436  491570 kubeadm.go:309] CGROUPS_HUGETLB: enabled
	I0316 17:48:00.646487  491570 kubeadm.go:309] CGROUPS_BLKIO: enabled
	I0316 17:48:00.729198  491570 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0316 17:48:00.729414  491570 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0316 17:48:00.729562  491570 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0316 17:48:00.991046  491570 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0316 17:48:00.994821  491570 out.go:204]   - Generating certificates and keys ...
	I0316 17:48:00.994939  491570 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0316 17:48:00.995022  491570 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0316 17:48:01.707733  491570 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0316 17:48:02.493386  491570 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0316 17:48:03.211640  491570 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0316 17:48:03.631761  491570 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0316 17:48:03.882525  491570 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0316 17:48:03.882844  491570 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [embed-certs-126148 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0316 17:48:04.530357  491570 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0316 17:48:04.530762  491570 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-126148 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0316 17:48:05.271500  491570 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0316 17:48:06.724886  491570 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0316 17:48:06.936549  491570 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0316 17:48:06.936828  491570 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0316 17:48:07.999116  491570 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0316 17:48:08.377392  491570 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0316 17:48:08.583452  491570 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0316 17:48:09.033090  491570 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0316 17:48:09.033704  491570 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0316 17:48:09.036398  491570 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0316 17:48:09.125999  481631 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 17:48:09.141689  481631 api_server.go:72] duration metric: took 6m1.572659285s to wait for apiserver process to appear ...
	I0316 17:48:09.141715  481631 api_server.go:88] waiting for apiserver healthz status ...
	I0316 17:48:09.144420  481631 out.go:177] 
	W0316 17:48:09.146305  481631 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: cluster wait timed out during healthz check
	W0316 17:48:09.146326  481631 out.go:239] * 
	W0316 17:48:09.147294  481631 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0316 17:48:09.148762  481631 out.go:177] 
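The exit above is the API-server health wait timing out after 6m0s; the same healthz endpoint can be probed by hand for a given profile (PROFILE below is a placeholder for the profile name):

    # through the kubeconfig context written by minikube
    kubectl --context PROFILE get --raw /healthz
    # or directly against the node's apiserver port
    curl -sk "https://$(minikube -p PROFILE ip):8443/healthz"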
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	9d15622a91ad5       523cad1a4df73       2 minutes ago       Exited              dashboard-metrics-scraper   5                   ca2c282a6a69f       dashboard-metrics-scraper-8d5bb5db8-6p6nb
	747498059d66b       ba04bb24b9575       4 minutes ago       Running             storage-provisioner         2                   e85771003bd10       storage-provisioner
	a8228ba39ff72       20b332c9a70d8       5 minutes ago       Running             kubernetes-dashboard        0                   21dee6330880f       kubernetes-dashboard-cd95d586-pcqp6
	8dd0ee223c90f       db91994f4ee8f       5 minutes ago       Running             coredns                     1                   75f94f5b07827       coredns-74ff55c5b-jcdh5
	53266df997bef       25a5233254979       5 minutes ago       Running             kube-proxy                  1                   d2a0df3a84ee3       kube-proxy-x59w9
	137f2de59c6dc       4740c1948d3fc       5 minutes ago       Running             kindnet-cni                 1                   96958cbba0bfc       kindnet-6v5gx
	b1af79071c039       1611cd07b61d5       5 minutes ago       Running             busybox                     1                   3a2aa814d9e17       busybox
	c5196a521ea11       ba04bb24b9575       5 minutes ago       Exited              storage-provisioner         1                   e85771003bd10       storage-provisioner
	16d138fc440dd       05b738aa1bc63       5 minutes ago       Running             etcd                        1                   d1630cb829798       etcd-old-k8s-version-746380
	bf0d1869cc0d6       e7605f88f17d6       5 minutes ago       Running             kube-scheduler              1                   a2f8c2e49c69c       kube-scheduler-old-k8s-version-746380
	f6a137e8a3b14       2c08bbbc02d3a       5 minutes ago       Running             kube-apiserver              1                   7f8eada883dc1       kube-apiserver-old-k8s-version-746380
	c5661cb115edd       1df8a2b116bd1       5 minutes ago       Running             kube-controller-manager     1                   9bbef602e7a9a       kube-controller-manager-old-k8s-version-746380
	a9fb41035bdee       1611cd07b61d5       6 minutes ago       Exited              busybox                     0                   ab899972b2b6f       busybox
	a7700f61f9427       db91994f4ee8f       7 minutes ago       Exited              coredns                     0                   4dc5a976c2666       coredns-74ff55c5b-jcdh5
	22beb4846f86e       4740c1948d3fc       8 minutes ago       Exited              kindnet-cni                 0                   7fdbc246b6e97       kindnet-6v5gx
	ccbf82b14ebc8       25a5233254979       8 minutes ago       Exited              kube-proxy                  0                   71a0c35093042       kube-proxy-x59w9
	bb9fc8b360819       e7605f88f17d6       8 minutes ago       Exited              kube-scheduler              0                   b52041f140091       kube-scheduler-old-k8s-version-746380
	3ffcf3139cf08       1df8a2b116bd1       8 minutes ago       Exited              kube-controller-manager     0                   cb2999b55a22c       kube-controller-manager-old-k8s-version-746380
	0340e5ca0be60       2c08bbbc02d3a       8 minutes ago       Exited              kube-apiserver              0                   aa9abb7c07a37       kube-apiserver-old-k8s-version-746380
	53e126d87d370       05b738aa1bc63       8 minutes ago       Exited              etcd                        0                   d6972a0d90ecc       etcd-old-k8s-version-746380
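The table above is a crictl listing of all containers on the node, running and exited; it can be reproduced with the containerd socket recorded in the node annotations below:

    sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a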
	
	
	==> containerd <==
	Mar 16 17:44:26 old-k8s-version-746380 containerd[567]: time="2024-03-16T17:44:26.596728755Z" level=info msg="CreateContainer within sandbox \"ca2c282a6a69f74cfde71c4496b45f06ca7e9ccfcdbfeb0e04e3f775bb75f900\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:4,} returns container id \"280e8899dc39a0e9718ee33b23f3d654702396a83c542fdaf765d262983984f2\""
	Mar 16 17:44:26 old-k8s-version-746380 containerd[567]: time="2024-03-16T17:44:26.597400389Z" level=info msg="StartContainer for \"280e8899dc39a0e9718ee33b23f3d654702396a83c542fdaf765d262983984f2\""
	Mar 16 17:44:26 old-k8s-version-746380 containerd[567]: time="2024-03-16T17:44:26.688951180Z" level=info msg="StartContainer for \"280e8899dc39a0e9718ee33b23f3d654702396a83c542fdaf765d262983984f2\" returns successfully"
	Mar 16 17:44:26 old-k8s-version-746380 containerd[567]: time="2024-03-16T17:44:26.725010672Z" level=info msg="shim disconnected" id=280e8899dc39a0e9718ee33b23f3d654702396a83c542fdaf765d262983984f2
	Mar 16 17:44:26 old-k8s-version-746380 containerd[567]: time="2024-03-16T17:44:26.725292352Z" level=warning msg="cleaning up after shim disconnected" id=280e8899dc39a0e9718ee33b23f3d654702396a83c542fdaf765d262983984f2 namespace=k8s.io
	Mar 16 17:44:26 old-k8s-version-746380 containerd[567]: time="2024-03-16T17:44:26.725373664Z" level=info msg="cleaning up dead shim"
	Mar 16 17:44:26 old-k8s-version-746380 containerd[567]: time="2024-03-16T17:44:26.738660961Z" level=warning msg="cleanup warnings time=\"2024-03-16T17:44:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2972 runtime=io.containerd.runc.v2\n"
	Mar 16 17:44:27 old-k8s-version-746380 containerd[567]: time="2024-03-16T17:44:27.155740667Z" level=info msg="RemoveContainer for \"cfc50a9c42f1aae764252c7e7375d1626102f8178e5d27ee752c6ef9f55c3153\""
	Mar 16 17:44:27 old-k8s-version-746380 containerd[567]: time="2024-03-16T17:44:27.161875323Z" level=info msg="RemoveContainer for \"cfc50a9c42f1aae764252c7e7375d1626102f8178e5d27ee752c6ef9f55c3153\" returns successfully"
	Mar 16 17:45:22 old-k8s-version-746380 containerd[567]: time="2024-03-16T17:45:22.557318591Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Mar 16 17:45:22 old-k8s-version-746380 containerd[567]: time="2024-03-16T17:45:22.562131674Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" host=fake.domain
	Mar 16 17:45:22 old-k8s-version-746380 containerd[567]: time="2024-03-16T17:45:22.563559323Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Mar 16 17:45:56 old-k8s-version-746380 containerd[567]: time="2024-03-16T17:45:56.559363479Z" level=info msg="CreateContainer within sandbox \"ca2c282a6a69f74cfde71c4496b45f06ca7e9ccfcdbfeb0e04e3f775bb75f900\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:5,}"
	Mar 16 17:45:56 old-k8s-version-746380 containerd[567]: time="2024-03-16T17:45:56.574296223Z" level=info msg="CreateContainer within sandbox \"ca2c282a6a69f74cfde71c4496b45f06ca7e9ccfcdbfeb0e04e3f775bb75f900\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:5,} returns container id \"9d15622a91ad5295d0628c1803fdd4c3e1328c9b796d7ba9c433b24c393af343\""
	Mar 16 17:45:56 old-k8s-version-746380 containerd[567]: time="2024-03-16T17:45:56.575178719Z" level=info msg="StartContainer for \"9d15622a91ad5295d0628c1803fdd4c3e1328c9b796d7ba9c433b24c393af343\""
	Mar 16 17:45:56 old-k8s-version-746380 containerd[567]: time="2024-03-16T17:45:56.649725483Z" level=info msg="StartContainer for \"9d15622a91ad5295d0628c1803fdd4c3e1328c9b796d7ba9c433b24c393af343\" returns successfully"
	Mar 16 17:45:56 old-k8s-version-746380 containerd[567]: time="2024-03-16T17:45:56.676411336Z" level=info msg="shim disconnected" id=9d15622a91ad5295d0628c1803fdd4c3e1328c9b796d7ba9c433b24c393af343
	Mar 16 17:45:56 old-k8s-version-746380 containerd[567]: time="2024-03-16T17:45:56.676472061Z" level=warning msg="cleaning up after shim disconnected" id=9d15622a91ad5295d0628c1803fdd4c3e1328c9b796d7ba9c433b24c393af343 namespace=k8s.io
	Mar 16 17:45:56 old-k8s-version-746380 containerd[567]: time="2024-03-16T17:45:56.676483622Z" level=info msg="cleaning up dead shim"
	Mar 16 17:45:56 old-k8s-version-746380 containerd[567]: time="2024-03-16T17:45:56.685577789Z" level=warning msg="cleanup warnings time=\"2024-03-16T17:45:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3202 runtime=io.containerd.runc.v2\n"
	Mar 16 17:45:57 old-k8s-version-746380 containerd[567]: time="2024-03-16T17:45:57.381011007Z" level=info msg="RemoveContainer for \"280e8899dc39a0e9718ee33b23f3d654702396a83c542fdaf765d262983984f2\""
	Mar 16 17:45:57 old-k8s-version-746380 containerd[567]: time="2024-03-16T17:45:57.386327701Z" level=info msg="RemoveContainer for \"280e8899dc39a0e9718ee33b23f3d654702396a83c542fdaf765d262983984f2\" returns successfully"
	Mar 16 17:48:09 old-k8s-version-746380 containerd[567]: time="2024-03-16T17:48:09.572392973Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Mar 16 17:48:09 old-k8s-version-746380 containerd[567]: time="2024-03-16T17:48:09.597966176Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" host=fake.domain
	Mar 16 17:48:09 old-k8s-version-746380 containerd[567]: time="2024-03-16T17:48:09.600142620Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
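The PullImage failures above all have the same cause: the image reference names the host fake.domain, which the node's resolver at 192.168.76.1 cannot look up. The DNS failure is easy to confirm from the node:

    # expected to fail: fake.domain is not a resolvable host
    nslookup fake.domain 192.168.76.1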
	
	
	==> coredns [8dd0ee223c90f99d346db9114977e56c2bbfecb904aa1223b3e8e1109264981d] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:35796 - 42710 "HINFO IN 8387435654806112351.8133578058127294849. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.032282996s
	
	
	==> coredns [a7700f61f9427c51311df28b60dc3da67a68a1be40d1f17810185e95a656508c] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:51658 - 57749 "HINFO IN 4747854361398744785.3658144349413447401. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.032130341s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-746380
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-746380
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=dcb7bcec19ba52ac09364e1139fb2071215a1bc6
	                    minikube.k8s.io/name=old-k8s-version-746380
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_16T17_39_32_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 16 Mar 2024 17:39:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-746380
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 16 Mar 2024 17:48:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 16 Mar 2024 17:43:17 +0000   Sat, 16 Mar 2024 17:39:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 16 Mar 2024 17:43:17 +0000   Sat, 16 Mar 2024 17:39:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 16 Mar 2024 17:43:17 +0000   Sat, 16 Mar 2024 17:39:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 16 Mar 2024 17:43:17 +0000   Sat, 16 Mar 2024 17:39:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-746380
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	System Info:
	  Machine ID:                 82df2265139b4222a7d1497411fe78d4
	  System UUID:                2c5fbd87-0f9a-4cbf-9852-773b036b7168
	  Boot ID:                    183b8861-7db8-4da8-9969-d0fd94fbc14e
	  Kernel Version:             5.15.0-1055-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.6.28
	  Kubelet Version:            v1.20.0
	  Kube-Proxy Version:         v1.20.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m36s
	  kube-system                 coredns-74ff55c5b-jcdh5                           100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     8m23s
	  kube-system                 etcd-old-k8s-version-746380                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         8m31s
	  kube-system                 kindnet-6v5gx                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      8m23s
	  kube-system                 kube-apiserver-old-k8s-version-746380             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m31s
	  kube-system                 kube-controller-manager-old-k8s-version-746380    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m31s
	  kube-system                 kube-proxy-x59w9                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m23s
	  kube-system                 kube-scheduler-old-k8s-version-746380             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m31s
	  kube-system                 metrics-server-9975d5f86-s65lt                    100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         6m24s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m21s
	  kubernetes-dashboard        dashboard-metrics-scraper-8d5bb5db8-6p6nb         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m25s
	  kubernetes-dashboard        kubernetes-dashboard-cd95d586-pcqp6               0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             420Mi (5%)  220Mi (2%)
	  ephemeral-storage  100Mi (0%)  0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  8m51s (x4 over 8m51s)  kubelet     Node old-k8s-version-746380 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m51s (x4 over 8m51s)  kubelet     Node old-k8s-version-746380 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m51s (x4 over 8m51s)  kubelet     Node old-k8s-version-746380 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m32s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m31s                  kubelet     Node old-k8s-version-746380 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m31s                  kubelet     Node old-k8s-version-746380 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m31s                  kubelet     Node old-k8s-version-746380 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m31s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                8m23s                  kubelet     Node old-k8s-version-746380 status is now: NodeReady
	  Normal  Starting                 8m22s                  kube-proxy  Starting kube-proxy.
	  Normal  Starting                 5m56s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m56s (x8 over 5m56s)  kubelet     Node old-k8s-version-746380 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m56s (x8 over 5m56s)  kubelet     Node old-k8s-version-746380 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m56s (x7 over 5m56s)  kubelet     Node old-k8s-version-746380 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m56s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 5m42s                  kube-proxy  Starting kube-proxy.
	
	
	==> dmesg <==
	[  +0.001177] FS-Cache: O-key=[8] 'e03a5c0100000000'
	[  +0.000780] FS-Cache: N-cookie c=0000001e [p=00000015 fl=2 nc=0 na=1]
	[  +0.001022] FS-Cache: N-cookie d=00000000bd90532f{9p.inode} n=00000000ea5f5674
	[  +0.001095] FS-Cache: N-key=[8] 'e03a5c0100000000'
	[  +0.003573] FS-Cache: Duplicate cookie detected
	[  +0.000861] FS-Cache: O-cookie c=00000018 [p=00000015 fl=226 nc=0 na=1]
	[  +0.001171] FS-Cache: O-cookie d=00000000bd90532f{9p.inode} n=0000000042287ac7
	[  +0.001177] FS-Cache: O-key=[8] 'e03a5c0100000000'
	[  +0.000754] FS-Cache: N-cookie c=0000001f [p=00000015 fl=2 nc=0 na=1]
	[  +0.001056] FS-Cache: N-cookie d=00000000bd90532f{9p.inode} n=00000000f962ef30
	[  +0.001144] FS-Cache: N-key=[8] 'e03a5c0100000000'
	[  +2.779373] FS-Cache: Duplicate cookie detected
	[  +0.000767] FS-Cache: O-cookie c=00000016 [p=00000015 fl=226 nc=0 na=1]
	[  +0.000951] FS-Cache: O-cookie d=00000000bd90532f{9p.inode} n=00000000f905028d
	[  +0.001067] FS-Cache: O-key=[8] 'df3a5c0100000000'
	[  +0.000705] FS-Cache: N-cookie c=00000021 [p=00000015 fl=2 nc=0 na=1]
	[  +0.001087] FS-Cache: N-cookie d=00000000bd90532f{9p.inode} n=00000000ea5f5674
	[  +0.001053] FS-Cache: N-key=[8] 'df3a5c0100000000'
	[  +0.347499] FS-Cache: Duplicate cookie detected
	[  +0.000798] FS-Cache: O-cookie c=0000001b [p=00000015 fl=226 nc=0 na=1]
	[  +0.001036] FS-Cache: O-cookie d=00000000bd90532f{9p.inode} n=000000006a03c990
	[  +0.001196] FS-Cache: O-key=[8] 'e53a5c0100000000'
	[  +0.000764] FS-Cache: N-cookie c=00000022 [p=00000015 fl=2 nc=0 na=1]
	[  +0.000985] FS-Cache: N-cookie d=00000000bd90532f{9p.inode} n=00000000f2586d18
	[  +0.001131] FS-Cache: N-key=[8] 'e53a5c0100000000'
	
	
	==> etcd [16d138fc440dd55f8a882b1a470bd88b116d89b1276ca648e106057f46db7677] <==
	2024-03-16 17:44:10.343145 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-16 17:44:20.343196 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-16 17:44:30.343268 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-16 17:44:40.343311 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-16 17:44:50.343326 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-16 17:45:00.351906 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-16 17:45:10.343301 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-16 17:45:20.343196 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-16 17:45:30.343186 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-16 17:45:40.343052 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-16 17:45:50.343215 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-16 17:46:00.345181 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-16 17:46:10.343258 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-16 17:46:20.343171 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-16 17:46:30.343175 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-16 17:46:40.343073 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-16 17:46:50.343343 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-16 17:47:00.349019 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-16 17:47:10.343243 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-16 17:47:20.343145 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-16 17:47:30.343303 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-16 17:47:40.343327 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-16 17:47:50.343182 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-16 17:48:00.346512 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-16 17:48:10.343984 I | etcdserver/api/etcdhttp: /health OK (status code 200)
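The /health probes above are served on etcd's TLS client URL, so a manual check needs the etcd client certificates; assuming the kubeadm-style file names under the /var/lib/minikube/certs directory used earlier in this log, it would look like:

    sudo curl -s \
      --cacert /var/lib/minikube/certs/etcd/ca.crt \
      --cert   /var/lib/minikube/certs/etcd/healthcheck-client.crt \
      --key    /var/lib/minikube/certs/etcd/healthcheck-client.key \
      https://127.0.0.1:2379/health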
	
	
	==> etcd [53e126d87d370fc7c40afb41dc1a7f49e87707a49e2d1486adf3a6445555d955] <==
	raft2024/03/16 17:39:21 INFO: ea7e25599daad906 became candidate at term 2
	raft2024/03/16 17:39:21 INFO: ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2
	raft2024/03/16 17:39:21 INFO: ea7e25599daad906 became leader at term 2
	raft2024/03/16 17:39:21 INFO: raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2
	2024-03-16 17:39:21.721487 I | etcdserver: setting up the initial cluster version to 3.4
	2024-03-16 17:39:21.723886 N | etcdserver/membership: set the initial cluster version to 3.4
	2024-03-16 17:39:21.724048 I | etcdserver/api: enabled capabilities for version 3.4
	2024-03-16 17:39:21.724149 I | etcdserver: published {Name:old-k8s-version-746380 ClientURLs:[https://192.168.76.2:2379]} to cluster 6f20f2c4b2fb5f8a
	2024-03-16 17:39:21.724233 I | embed: ready to serve client requests
	2024-03-16 17:39:21.726092 I | embed: serving client requests on 127.0.0.1:2379
	2024-03-16 17:39:21.726289 I | embed: ready to serve client requests
	2024-03-16 17:39:21.736100 I | embed: serving client requests on 192.168.76.2:2379
	2024-03-16 17:39:42.213925 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-16 17:39:50.341098 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-16 17:40:00.341768 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-16 17:40:10.340019 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-16 17:40:20.339936 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-16 17:40:30.339925 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-16 17:40:40.339998 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-16 17:40:50.340106 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-16 17:41:00.340231 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-16 17:41:10.340212 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-16 17:41:20.339918 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-16 17:41:30.340128 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-16 17:41:40.340073 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> kernel <==
	 17:48:11 up  3:30,  0 users,  load average: 1.25, 1.87, 2.37
	Linux old-k8s-version-746380 5.15.0-1055-aws #60~20.04.1-Ubuntu SMP Thu Feb 22 15:54:21 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [137f2de59c6dc6edd43d77e791ff547f8b6673cd98d12a1046d38b593804d914] <==
	I0316 17:46:10.916403       1 main.go:227] handling current node
	I0316 17:46:20.923718       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0316 17:46:20.923745       1 main.go:227] handling current node
	I0316 17:46:30.934538       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0316 17:46:30.934564       1 main.go:227] handling current node
	I0316 17:46:40.952436       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0316 17:46:40.952465       1 main.go:227] handling current node
	I0316 17:46:50.959080       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0316 17:46:50.959108       1 main.go:227] handling current node
	I0316 17:47:00.978727       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0316 17:47:00.979037       1 main.go:227] handling current node
	I0316 17:47:10.988969       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0316 17:47:10.989000       1 main.go:227] handling current node
	I0316 17:47:21.006746       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0316 17:47:21.006777       1 main.go:227] handling current node
	I0316 17:47:31.027875       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0316 17:47:31.027906       1 main.go:227] handling current node
	I0316 17:47:41.045164       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0316 17:47:41.045391       1 main.go:227] handling current node
	I0316 17:47:51.055638       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0316 17:47:51.055667       1 main.go:227] handling current node
	I0316 17:48:01.069718       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0316 17:48:01.069746       1 main.go:227] handling current node
	I0316 17:48:11.080770       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0316 17:48:11.080804       1 main.go:227] handling current node
	
	
	==> kindnet [22beb4846f86e0b94f967a82643633bda14c92a967549166bf63c77fcd3a5673] <==
	I0316 17:39:49.426779       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0316 17:39:49.426893       1 main.go:107] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I0316 17:39:49.427025       1 main.go:116] setting mtu 1500 for CNI 
	I0316 17:39:49.427039       1 main.go:146] kindnetd IP family: "ipv4"
	I0316 17:39:49.427051       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0316 17:40:19.648082       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I0316 17:40:19.662556       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0316 17:40:19.662587       1 main.go:227] handling current node
	I0316 17:40:29.679987       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0316 17:40:29.680014       1 main.go:227] handling current node
	I0316 17:40:39.693128       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0316 17:40:39.693156       1 main.go:227] handling current node
	I0316 17:40:49.705625       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0316 17:40:49.705653       1 main.go:227] handling current node
	I0316 17:40:59.711818       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0316 17:40:59.711847       1 main.go:227] handling current node
	I0316 17:41:09.724055       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0316 17:41:09.724084       1 main.go:227] handling current node
	I0316 17:41:19.749078       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0316 17:41:19.749313       1 main.go:227] handling current node
	I0316 17:41:29.844971       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0316 17:41:29.845089       1 main.go:227] handling current node
	I0316 17:41:39.913050       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0316 17:41:39.913077       1 main.go:227] handling current node
	
	
	==> kube-apiserver [0340e5ca0be60b47abce880f66d4c4e5fc876c20b19e0b5c769ec2a4f1b8547b] <==
	I0316 17:39:29.449233       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0316 17:39:29.449271       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0316 17:39:29.456257       1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
	I0316 17:39:29.461762       1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
	I0316 17:39:29.461782       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0316 17:39:29.958240       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0316 17:39:29.998725       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0316 17:39:30.140919       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I0316 17:39:30.142621       1 controller.go:606] quota admission added evaluator for: endpoints
	I0316 17:39:30.149729       1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0316 17:39:31.128644       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	I0316 17:39:31.521766       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I0316 17:39:31.614025       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I0316 17:39:39.944915       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	I0316 17:39:48.247550       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	I0316 17:39:48.497244       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
	I0316 17:40:06.753299       1 client.go:360] parsed scheme: "passthrough"
	I0316 17:40:06.753387       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0316 17:40:06.753404       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0316 17:40:49.529160       1 client.go:360] parsed scheme: "passthrough"
	I0316 17:40:49.529203       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0316 17:40:49.529214       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0316 17:41:20.441946       1 client.go:360] parsed scheme: "passthrough"
	I0316 17:41:20.441989       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0316 17:41:20.441997       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	
	==> kube-apiserver [f6a137e8a3b1485dd10f52919de9c0fef41fc33d23e13d15ecd70b4ee918c6d5] <==
	I0316 17:44:45.271648       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0316 17:44:45.271687       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0316 17:45:29.672540       1 handler_proxy.go:102] no RequestInfo found in the context
	E0316 17:45:29.672611       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0316 17:45:29.672620       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0316 17:45:29.980181       1 client.go:360] parsed scheme: "passthrough"
	I0316 17:45:29.980234       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0316 17:45:29.980243       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0316 17:46:07.058528       1 client.go:360] parsed scheme: "passthrough"
	I0316 17:46:07.058571       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0316 17:46:07.058580       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0316 17:46:45.815561       1 client.go:360] parsed scheme: "passthrough"
	I0316 17:46:45.815672       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0316 17:46:45.815692       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0316 17:47:19.228406       1 client.go:360] parsed scheme: "passthrough"
	I0316 17:47:19.228450       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0316 17:47:19.228458       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0316 17:47:27.749333       1 handler_proxy.go:102] no RequestInfo found in the context
	E0316 17:47:27.749417       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0316 17:47:27.749428       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0316 17:47:57.958156       1 client.go:360] parsed scheme: "passthrough"
	I0316 17:47:57.958198       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0316 17:47:57.958205       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	
	==> kube-controller-manager [3ffcf3139cf08c2e735e53f3bed4469b3466bcbedc7c3cb0bba55d896472640b] <==
	I0316 17:39:48.304683       1 shared_informer.go:247] Caches are synced for disruption 
	I0316 17:39:48.304889       1 disruption.go:339] Sending events to api server.
	I0316 17:39:48.354788       1 shared_informer.go:247] Caches are synced for bootstrap_signer 
	I0316 17:39:48.408015       1 shared_informer.go:247] Caches are synced for job 
	I0316 17:39:48.441215       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-4bzb6"
	I0316 17:39:48.448936       1 shared_informer.go:247] Caches are synced for crt configmap 
	I0316 17:39:48.460322       1 shared_informer.go:247] Caches are synced for daemon sets 
	I0316 17:39:48.465526       1 shared_informer.go:247] Caches are synced for taint 
	I0316 17:39:48.465598       1 node_lifecycle_controller.go:1429] Initializing eviction metric for zone: 
	W0316 17:39:48.465660       1 node_lifecycle_controller.go:1044] Missing timestamp for Node old-k8s-version-746380. Assuming now as a timestamp.
	I0316 17:39:48.465698       1 node_lifecycle_controller.go:1245] Controller detected that zone  is now in state Normal.
	I0316 17:39:48.465980       1 taint_manager.go:187] Starting NoExecuteTaintManager
	I0316 17:39:48.470113       1 event.go:291] "Event occurred" object="old-k8s-version-746380" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node old-k8s-version-746380 event: Registered Node old-k8s-version-746380 in Controller"
	I0316 17:39:48.471445       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-jcdh5"
	I0316 17:39:48.487772       1 shared_informer.go:247] Caches are synced for resource quota 
	I0316 17:39:48.505781       1 shared_informer.go:247] Caches are synced for resource quota 
	I0316 17:39:48.542624       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-6v5gx"
	I0316 17:39:48.542650       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-x59w9"
	I0316 17:39:48.655799       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I0316 17:39:48.873407       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0316 17:39:48.889678       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0316 17:39:48.889699       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0316 17:39:50.129088       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
	I0316 17:39:50.193310       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-4bzb6"
	I0316 17:41:46.400400       1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-9975d5f86 to 1"
	
	
	==> kube-controller-manager [c5661cb115eddb01bce4d502126d119b47c1c22da24660a5c5d57202fad6e10e] <==
	W0316 17:43:51.915144       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0316 17:44:17.884065       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0316 17:44:23.565796       1 request.go:655] Throttling request took 1.048139043s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W0316 17:44:24.417312       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0316 17:44:48.385949       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0316 17:44:56.067706       1 request.go:655] Throttling request took 1.046771291s, request: GET:https://192.168.76.2:8443/apis/authentication.k8s.io/v1?timeout=32s
	W0316 17:44:56.919212       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0316 17:45:18.887899       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0316 17:45:28.569920       1 request.go:655] Throttling request took 1.048355944s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W0316 17:45:29.421501       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0316 17:45:49.389743       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0316 17:46:01.072026       1 request.go:655] Throttling request took 1.042577648s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W0316 17:46:01.923532       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0316 17:46:19.891642       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0316 17:46:33.574019       1 request.go:655] Throttling request took 1.048502492s, request: GET:https://192.168.76.2:8443/apis/certificates.k8s.io/v1?timeout=32s
	W0316 17:46:34.425887       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0316 17:46:50.394179       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0316 17:47:06.076336       1 request.go:655] Throttling request took 1.047954981s, request: GET:https://192.168.76.2:8443/apis/batch/v1?timeout=32s
	W0316 17:47:06.927930       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0316 17:47:20.896482       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0316 17:47:38.578373       1 request.go:655] Throttling request took 1.048372265s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W0316 17:47:39.429915       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0316 17:47:51.398447       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0316 17:48:11.083713       1 request.go:655] Throttling request took 1.046522305s, request: GET:https://192.168.76.2:8443/apis/apiextensions.k8s.io/v1?timeout=32s
	W0316 17:48:11.935468       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
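The garbage-collector and resource-quota warnings above all stem from one aggregated API, v1beta1.metrics.k8s.io, reporting unavailable. Its registration and the backing pod can be checked with standard kubectl commands (using the profile name as the kubeconfig context, as elsewhere in this report):

    kubectl --context old-k8s-version-746380 get apiservice v1beta1.metrics.k8s.io
    kubectl --context old-k8s-version-746380 -n kube-system describe pod metrics-server-9975d5f86-s65lt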
	
	
	==> kube-proxy [53266df997beffb4f7bfa6609d282d4f498bcdb315a85073da81dd740c85139f] <==
	I0316 17:42:29.975117       1 node.go:172] Successfully retrieved node IP: 192.168.76.2
	I0316 17:42:29.975196       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
	W0316 17:42:29.992855       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0316 17:42:29.992947       1 server_others.go:185] Using iptables Proxier.
	I0316 17:42:29.993163       1 server.go:650] Version: v1.20.0
	I0316 17:42:29.996616       1 config.go:315] Starting service config controller
	I0316 17:42:29.996671       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0316 17:42:29.996957       1 config.go:224] Starting endpoint slice config controller
	I0316 17:42:29.996986       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0316 17:42:30.097090       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0316 17:42:30.097145       1 shared_informer.go:247] Caches are synced for service config 
	
	
	==> kube-proxy [ccbf82b14ebc82618a0db0f8cce371995c37bb4d2cd2b873a46ac53578fbec9b] <==
	I0316 17:39:49.427473       1 node.go:172] Successfully retrieved node IP: 192.168.76.2
	I0316 17:39:49.427801       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
	W0316 17:39:49.483797       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0316 17:39:49.484286       1 server_others.go:185] Using iptables Proxier.
	I0316 17:39:49.484528       1 server.go:650] Version: v1.20.0
	I0316 17:39:49.485943       1 config.go:315] Starting service config controller
	I0316 17:39:49.485966       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0316 17:39:49.486002       1 config.go:224] Starting endpoint slice config controller
	I0316 17:39:49.486009       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0316 17:39:49.589448       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0316 17:39:49.589506       1 shared_informer.go:247] Caches are synced for service config 
	
	
	==> kube-scheduler [bb9fc8b360819c7a19f5e182ffa90ecf3dc71344631dac019d43ec3d489bbb79] <==
	W0316 17:39:28.634042       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0316 17:39:28.634069       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0316 17:39:28.634082       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0316 17:39:28.634088       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0316 17:39:28.725235       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0316 17:39:28.725329       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0316 17:39:28.725344       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0316 17:39:28.725358       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0316 17:39:28.733406       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0316 17:39:28.739452       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0316 17:39:28.739731       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0316 17:39:28.739852       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0316 17:39:28.740336       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0316 17:39:28.750910       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0316 17:39:28.751454       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0316 17:39:28.751954       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0316 17:39:28.752192       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0316 17:39:28.755357       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0316 17:39:28.755447       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0316 17:39:28.755512       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0316 17:39:29.546204       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0316 17:39:29.621441       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0316 17:39:29.624648       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0316 17:39:29.791867       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0316 17:39:32.525484       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kube-scheduler [bf0d1869cc0d68bb43a663a92a7a2eb950593536676cefca598146c6f602803e] <==
	I0316 17:42:20.648199       1 serving.go:331] Generated self-signed cert in-memory
	W0316 17:42:26.728420       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0316 17:42:26.728464       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0316 17:42:26.728475       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0316 17:42:26.728498       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0316 17:42:27.035875       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0316 17:42:27.035917       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0316 17:42:27.047264       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0316 17:42:27.047352       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0316 17:42:27.241237       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kubelet <==
	Mar 16 17:46:39 old-k8s-version-746380 kubelet[660]: E0316 17:46:39.559649     660 pod_workers.go:191] Error syncing pod 44cccf23-c5b3-46c9-b387-cc05351e79ed ("dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"
	Mar 16 17:46:49 old-k8s-version-746380 kubelet[660]: E0316 17:46:49.557008     660 pod_workers.go:191] Error syncing pod cd17aa57-4a12-49cf-9cb2-d519126a78d2 ("metrics-server-9975d5f86-s65lt_kube-system(cd17aa57-4a12-49cf-9cb2-d519126a78d2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Mar 16 17:46:53 old-k8s-version-746380 kubelet[660]: I0316 17:46:53.556478     660 scope.go:95] [topologymanager] RemoveContainer - Container ID: 9d15622a91ad5295d0628c1803fdd4c3e1328c9b796d7ba9c433b24c393af343
	Mar 16 17:46:53 old-k8s-version-746380 kubelet[660]: E0316 17:46:53.557338     660 pod_workers.go:191] Error syncing pod 44cccf23-c5b3-46c9-b387-cc05351e79ed ("dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"
	Mar 16 17:47:04 old-k8s-version-746380 kubelet[660]: E0316 17:47:04.557220     660 pod_workers.go:191] Error syncing pod cd17aa57-4a12-49cf-9cb2-d519126a78d2 ("metrics-server-9975d5f86-s65lt_kube-system(cd17aa57-4a12-49cf-9cb2-d519126a78d2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Mar 16 17:47:05 old-k8s-version-746380 kubelet[660]: I0316 17:47:05.556850     660 scope.go:95] [topologymanager] RemoveContainer - Container ID: 9d15622a91ad5295d0628c1803fdd4c3e1328c9b796d7ba9c433b24c393af343
	Mar 16 17:47:05 old-k8s-version-746380 kubelet[660]: E0316 17:47:05.557240     660 pod_workers.go:191] Error syncing pod 44cccf23-c5b3-46c9-b387-cc05351e79ed ("dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"
	Mar 16 17:47:16 old-k8s-version-746380 kubelet[660]: I0316 17:47:16.556350     660 scope.go:95] [topologymanager] RemoveContainer - Container ID: 9d15622a91ad5295d0628c1803fdd4c3e1328c9b796d7ba9c433b24c393af343
	Mar 16 17:47:16 old-k8s-version-746380 kubelet[660]: E0316 17:47:16.557143     660 pod_workers.go:191] Error syncing pod 44cccf23-c5b3-46c9-b387-cc05351e79ed ("dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"
	Mar 16 17:47:16 old-k8s-version-746380 kubelet[660]: E0316 17:47:16.557232     660 pod_workers.go:191] Error syncing pod cd17aa57-4a12-49cf-9cb2-d519126a78d2 ("metrics-server-9975d5f86-s65lt_kube-system(cd17aa57-4a12-49cf-9cb2-d519126a78d2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Mar 16 17:47:27 old-k8s-version-746380 kubelet[660]: I0316 17:47:27.560101     660 scope.go:95] [topologymanager] RemoveContainer - Container ID: 9d15622a91ad5295d0628c1803fdd4c3e1328c9b796d7ba9c433b24c393af343
	Mar 16 17:47:27 old-k8s-version-746380 kubelet[660]: E0316 17:47:27.560424     660 pod_workers.go:191] Error syncing pod 44cccf23-c5b3-46c9-b387-cc05351e79ed ("dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"
	Mar 16 17:47:27 old-k8s-version-746380 kubelet[660]: E0316 17:47:27.561727     660 pod_workers.go:191] Error syncing pod cd17aa57-4a12-49cf-9cb2-d519126a78d2 ("metrics-server-9975d5f86-s65lt_kube-system(cd17aa57-4a12-49cf-9cb2-d519126a78d2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Mar 16 17:47:39 old-k8s-version-746380 kubelet[660]: I0316 17:47:39.564973     660 scope.go:95] [topologymanager] RemoveContainer - Container ID: 9d15622a91ad5295d0628c1803fdd4c3e1328c9b796d7ba9c433b24c393af343
	Mar 16 17:47:39 old-k8s-version-746380 kubelet[660]: E0316 17:47:39.565385     660 pod_workers.go:191] Error syncing pod 44cccf23-c5b3-46c9-b387-cc05351e79ed ("dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"
	Mar 16 17:47:42 old-k8s-version-746380 kubelet[660]: E0316 17:47:42.561108     660 pod_workers.go:191] Error syncing pod cd17aa57-4a12-49cf-9cb2-d519126a78d2 ("metrics-server-9975d5f86-s65lt_kube-system(cd17aa57-4a12-49cf-9cb2-d519126a78d2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Mar 16 17:47:50 old-k8s-version-746380 kubelet[660]: I0316 17:47:50.556317     660 scope.go:95] [topologymanager] RemoveContainer - Container ID: 9d15622a91ad5295d0628c1803fdd4c3e1328c9b796d7ba9c433b24c393af343
	Mar 16 17:47:50 old-k8s-version-746380 kubelet[660]: E0316 17:47:50.556652     660 pod_workers.go:191] Error syncing pod 44cccf23-c5b3-46c9-b387-cc05351e79ed ("dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"
	Mar 16 17:47:56 old-k8s-version-746380 kubelet[660]: E0316 17:47:56.557033     660 pod_workers.go:191] Error syncing pod cd17aa57-4a12-49cf-9cb2-d519126a78d2 ("metrics-server-9975d5f86-s65lt_kube-system(cd17aa57-4a12-49cf-9cb2-d519126a78d2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Mar 16 17:48:01 old-k8s-version-746380 kubelet[660]: I0316 17:48:01.556497     660 scope.go:95] [topologymanager] RemoveContainer - Container ID: 9d15622a91ad5295d0628c1803fdd4c3e1328c9b796d7ba9c433b24c393af343
	Mar 16 17:48:01 old-k8s-version-746380 kubelet[660]: E0316 17:48:01.556851     660 pod_workers.go:191] Error syncing pod 44cccf23-c5b3-46c9-b387-cc05351e79ed ("dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-6p6nb_kubernetes-dashboard(44cccf23-c5b3-46c9-b387-cc05351e79ed)"
	Mar 16 17:48:09 old-k8s-version-746380 kubelet[660]: E0316 17:48:09.601900     660 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
	Mar 16 17:48:09 old-k8s-version-746380 kubelet[660]: E0316 17:48:09.601946     660 kuberuntime_image.go:51] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
	Mar 16 17:48:09 old-k8s-version-746380 kubelet[660]: E0316 17:48:09.602076     660 kuberuntime_manager.go:829] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:metrics-server-token-dlrz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec
:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-9975d5f86-s65lt_kube-system(cd17aa5
7-4a12-49cf-9cb2-d519126a78d2): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
	Mar 16 17:48:09 old-k8s-version-746380 kubelet[660]: E0316 17:48:09.602113     660 pod_workers.go:191] Error syncing pod cd17aa57-4a12-49cf-9cb2-d519126a78d2 ("metrics-server-9975d5f86-s65lt_kube-system(cd17aa57-4a12-49cf-9cb2-d519126a78d2)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	
	
	==> kubernetes-dashboard [a8228ba39ff72ee5a9f0f601ff331405a3653e6a672688d3942fd43ebd1f5ff0] <==
	2024/03/16 17:42:54 Using namespace: kubernetes-dashboard
	2024/03/16 17:42:54 Using in-cluster config to connect to apiserver
	2024/03/16 17:42:54 Using secret token for csrf signing
	2024/03/16 17:42:54 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/03/16 17:42:54 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/03/16 17:42:54 Successful initial request to the apiserver, version: v1.20.0
	2024/03/16 17:42:54 Generating JWE encryption key
	2024/03/16 17:42:54 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/03/16 17:42:54 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/03/16 17:42:54 Initializing JWE encryption key from synchronized object
	2024/03/16 17:42:54 Creating in-cluster Sidecar client
	2024/03/16 17:42:54 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/16 17:42:54 Serving insecurely on HTTP port: 9090
	2024/03/16 17:43:24 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/16 17:43:54 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/16 17:44:24 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/16 17:44:54 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/16 17:45:24 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/16 17:45:55 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/16 17:46:25 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/16 17:46:55 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/16 17:47:25 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/16 17:47:55 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/16 17:42:54 Starting overwatch
	
	
	==> storage-provisioner [747498059d66bf6a35719a49e025168cdec4e997bd41ff614c40cd4518774adb] <==
	I0316 17:43:14.746165       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0316 17:43:14.789948       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0316 17:43:14.789994       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0316 17:43:32.281193       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0316 17:43:32.293473       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4b2da0b3-9354-4bad-a6d0-4e47b2355ece", APIVersion:"v1", ResourceVersion:"840", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-746380_1a016025-a0ae-47a3-bddb-c524cc02d728 became leader
	I0316 17:43:32.293721       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-746380_1a016025-a0ae-47a3-bddb-c524cc02d728!
	I0316 17:43:32.400786       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-746380_1a016025-a0ae-47a3-bddb-c524cc02d728!
	
	
	==> storage-provisioner [c5196a521ea11d8df3329b51f670d2873b2e489ba1e6d7bad59e4d1a58567aaf] <==
	I0316 17:42:28.946914       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0316 17:42:58.948972       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-746380 -n old-k8s-version-746380
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-746380 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-9975d5f86-s65lt
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-746380 describe pod metrics-server-9975d5f86-s65lt
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-746380 describe pod metrics-server-9975d5f86-s65lt: exit status 1 (221.225282ms)

                                                
                                                
** stderr ** 
	E0316 17:48:13.525042  494404 memcache.go:287] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	E0316 17:48:13.556303  494404 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	E0316 17:48:13.566594  494404 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	E0316 17:48:13.570012  494404 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	E0316 17:48:13.583362  494404 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	E0316 17:48:13.587895  494404 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	Error from server (NotFound): pods "metrics-server-9975d5f86-s65lt" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-746380 describe pod metrics-server-9975d5f86-s65lt: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (373.73s)
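In the captured kubelet log above, the metrics-server pod is stuck pulling fake.domain/registry.k8s.io/echoserver:1.4 (the lookup of fake.domain on 192.168.76.1:53 fails, so the pull never succeeds), and the post-mortem query at helpers_test.go:261 lists that pod as the only non-running one. Below is a minimal, illustrative Go sketch of that same query, shelling out to kubectl the way the integration tests do; it assumes kubectl is on PATH and that the old-k8s-version-746380 context still exists, and it is not code taken from the suite.

// Illustrative sketch only (not part of the minikube test suite): reproduces the
// post-mortem query from helpers_test.go:261 by shelling out to kubectl, the same
// way the integration tests invoke external commands. Assumes kubectl is on PATH
// and the kubeconfig still contains the "old-k8s-version-746380" context.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("kubectl",
		"--context", "old-k8s-version-746380",
		"get", "po", "-A",
		"--field-selector", "status.phase!=Running",
		"-o", "jsonpath={.items[*].metadata.name}",
	).CombinedOutput()
	if err != nil {
		log.Fatalf("kubectl failed: %v\n%s", err, out)
	}
	// The report lists metrics-server-9975d5f86-s65lt here; it stays non-running because
	// its image points at fake.domain, which DNS cannot resolve (see the kubelet log).
	for _, name := range strings.Fields(string(out)) {
		fmt.Println("non-running pod:", name)
	}
}

The later describe step at helpers_test.go:277 then reports NotFound for that same pod name, which is why it exits with status 1 in the record above.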

                                                
                                    

Test pass (297/335)

Order    Passed test    Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 8.57
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.08
9 TestDownloadOnly/v1.20.0/DeleteAll 0.23
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.28.4/json-events 10.17
13 TestDownloadOnly/v1.28.4/preload-exists 0
17 TestDownloadOnly/v1.28.4/LogsDuration 0.09
18 TestDownloadOnly/v1.28.4/DeleteAll 0.21
19 TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds 0.14
21 TestDownloadOnly/v1.29.0-rc.2/json-events 11.74
22 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
26 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.08
27 TestDownloadOnly/v1.29.0-rc.2/DeleteAll 0.21
28 TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds 0.14
30 TestBinaryMirror 0.54
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.09
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
36 TestAddons/Setup 119.91
38 TestAddons/parallel/Registry 15.78
40 TestAddons/parallel/InspektorGadget 11.98
41 TestAddons/parallel/MetricsServer 5.86
44 TestAddons/parallel/CSI 65.3
45 TestAddons/parallel/Headlamp 10.46
46 TestAddons/parallel/CloudSpanner 5.81
47 TestAddons/parallel/LocalPath 53.73
48 TestAddons/parallel/NvidiaDevicePlugin 6.59
49 TestAddons/parallel/Yakd 5.01
52 TestAddons/serial/GCPAuth/Namespaces 0.18
53 TestAddons/StoppedEnableDisable 12.25
54 TestCertOptions 36.9
55 TestCertExpiration 233.19
57 TestForceSystemdFlag 43.77
58 TestForceSystemdEnv 46.55
59 TestDockerEnvContainerd 48.07
64 TestErrorSpam/setup 31.7
65 TestErrorSpam/start 0.73
66 TestErrorSpam/status 1.01
67 TestErrorSpam/pause 1.69
68 TestErrorSpam/unpause 1.8
69 TestErrorSpam/stop 1.47
72 TestFunctional/serial/CopySyncFile 0
73 TestFunctional/serial/StartWithProxy 57.39
74 TestFunctional/serial/AuditLog 0
75 TestFunctional/serial/SoftStart 6
76 TestFunctional/serial/KubeContext 0.07
77 TestFunctional/serial/KubectlGetPods 0.1
80 TestFunctional/serial/CacheCmd/cache/add_remote 4.08
81 TestFunctional/serial/CacheCmd/cache/add_local 1.49
82 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
83 TestFunctional/serial/CacheCmd/cache/list 0.07
84 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.33
85 TestFunctional/serial/CacheCmd/cache/cache_reload 2.12
86 TestFunctional/serial/CacheCmd/cache/delete 0.15
87 TestFunctional/serial/MinikubeKubectlCmd 0.15
88 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
89 TestFunctional/serial/ExtraConfig 47.01
90 TestFunctional/serial/ComponentHealth 0.1
91 TestFunctional/serial/LogsCmd 1.69
92 TestFunctional/serial/LogsFileCmd 2.08
93 TestFunctional/serial/InvalidService 4.64
95 TestFunctional/parallel/ConfigCmd 0.46
96 TestFunctional/parallel/DashboardCmd 8.7
97 TestFunctional/parallel/DryRun 0.54
98 TestFunctional/parallel/InternationalLanguage 0.25
99 TestFunctional/parallel/StatusCmd 1.33
103 TestFunctional/parallel/ServiceCmdConnect 10.69
104 TestFunctional/parallel/AddonsCmd 0.21
105 TestFunctional/parallel/PersistentVolumeClaim 26.26
107 TestFunctional/parallel/SSHCmd 0.73
108 TestFunctional/parallel/CpCmd 2.46
110 TestFunctional/parallel/FileSync 0.31
111 TestFunctional/parallel/CertSync 2.14
115 TestFunctional/parallel/NodeLabels 0.16
117 TestFunctional/parallel/NonActiveRuntimeDisabled 0.74
119 TestFunctional/parallel/License 0.33
121 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.63
122 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
124 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.48
125 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.1
126 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
130 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
131 TestFunctional/parallel/ServiceCmd/DeployApp 6.21
132 TestFunctional/parallel/ServiceCmd/List 0.72
133 TestFunctional/parallel/ProfileCmd/profile_not_create 0.6
134 TestFunctional/parallel/ProfileCmd/profile_list 0.6
135 TestFunctional/parallel/ServiceCmd/JSONOutput 0.67
136 TestFunctional/parallel/ProfileCmd/profile_json_output 0.5
137 TestFunctional/parallel/ServiceCmd/HTTPS 0.58
138 TestFunctional/parallel/MountCmd/any-port 7.92
139 TestFunctional/parallel/ServiceCmd/Format 0.58
140 TestFunctional/parallel/ServiceCmd/URL 0.4
141 TestFunctional/parallel/MountCmd/specific-port 2.78
142 TestFunctional/parallel/MountCmd/VerifyCleanup 2.31
143 TestFunctional/parallel/Version/short 0.1
144 TestFunctional/parallel/Version/components 1.32
145 TestFunctional/parallel/ImageCommands/ImageListShort 0.26
146 TestFunctional/parallel/ImageCommands/ImageListTable 0.3
147 TestFunctional/parallel/ImageCommands/ImageListJson 0.32
148 TestFunctional/parallel/ImageCommands/ImageListYaml 0.28
149 TestFunctional/parallel/ImageCommands/ImageBuild 2.76
150 TestFunctional/parallel/ImageCommands/Setup 2.55
151 TestFunctional/parallel/UpdateContextCmd/no_changes 0.23
152 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.19
153 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.21
158 TestFunctional/parallel/ImageCommands/ImageRemove 0.5
160 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.61
161 TestFunctional/delete_addon-resizer_images 0.08
162 TestFunctional/delete_my-image_image 0.02
163 TestFunctional/delete_minikube_cached_images 0.03
167 TestMultiControlPlane/serial/StartCluster 133.34
168 TestMultiControlPlane/serial/DeployApp 31.57
169 TestMultiControlPlane/serial/PingHostFromPods 1.76
170 TestMultiControlPlane/serial/AddWorkerNode 23.88
171 TestMultiControlPlane/serial/NodeLabels 0.11
172 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.86
173 TestMultiControlPlane/serial/CopyFile 19.98
174 TestMultiControlPlane/serial/StopSecondaryNode 12.88
175 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.57
176 TestMultiControlPlane/serial/RestartSecondaryNode 18.57
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.75
178 TestMultiControlPlane/serial/RestartClusterKeepsNodes 99.53
179 TestMultiControlPlane/serial/DeleteSecondaryNode 11.34
180 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.56
181 TestMultiControlPlane/serial/StopCluster 25.27
182 TestMultiControlPlane/serial/RestartCluster 78.44
183 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.59
184 TestMultiControlPlane/serial/AddSecondaryNode 45.36
185 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.81
189 TestJSONOutput/start/Command 55.68
190 TestJSONOutput/start/Audit 0
192 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/pause/Command 0.75
196 TestJSONOutput/pause/Audit 0
198 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/unpause/Command 0.67
202 TestJSONOutput/unpause/Audit 0
204 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
207 TestJSONOutput/stop/Command 5.82
208 TestJSONOutput/stop/Audit 0
210 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
211 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
212 TestErrorJSONOutput 0.22
214 TestKicCustomNetwork/create_custom_network 42.3
215 TestKicCustomNetwork/use_default_bridge_network 37.18
216 TestKicExistingNetwork 34.28
217 TestKicCustomSubnet 34.56
218 TestKicStaticIP 33.66
219 TestMainNoArgs 0.06
220 TestMinikubeProfile 75.25
223 TestMountStart/serial/StartWithMountFirst 6.38
224 TestMountStart/serial/VerifyMountFirst 0.27
225 TestMountStart/serial/StartWithMountSecond 7.32
226 TestMountStart/serial/VerifyMountSecond 0.29
227 TestMountStart/serial/DeleteFirst 1.61
228 TestMountStart/serial/VerifyMountPostDelete 0.27
229 TestMountStart/serial/Stop 1.19
230 TestMountStart/serial/RestartStopped 8.37
231 TestMountStart/serial/VerifyMountPostStop 0.27
234 TestMultiNode/serial/FreshStart2Nodes 74.8
235 TestMultiNode/serial/DeployApp2Nodes 4.57
236 TestMultiNode/serial/PingHostFrom2Pods 1.3
237 TestMultiNode/serial/AddNode 18.76
238 TestMultiNode/serial/MultiNodeLabels 0.09
239 TestMultiNode/serial/ProfileList 0.33
240 TestMultiNode/serial/CopyFile 10.41
241 TestMultiNode/serial/StopNode 2.3
242 TestMultiNode/serial/StartAfterStop 9.38
243 TestMultiNode/serial/RestartKeepsNodes 85.66
244 TestMultiNode/serial/DeleteNode 5.4
245 TestMultiNode/serial/StopMultiNode 23.98
246 TestMultiNode/serial/RestartMultiNode 55.98
247 TestMultiNode/serial/ValidateNameConflict 33.96
252 TestPreload 118.66
254 TestScheduledStopUnix 111.06
257 TestInsufficientStorage 10.04
258 TestRunningBinaryUpgrade 79.95
260 TestKubernetesUpgrade 393.7
261 TestMissingContainerUpgrade 169.93
263 TestPause/serial/Start 64.27
265 TestNoKubernetes/serial/StartNoK8sWithVersion 0.12
266 TestNoKubernetes/serial/StartWithK8s 45.6
267 TestNoKubernetes/serial/StartWithStopK8s 16.11
268 TestNoKubernetes/serial/Start 8.75
269 TestPause/serial/SecondStartNoReconfiguration 6.79
270 TestNoKubernetes/serial/VerifyK8sNotRunning 0.28
271 TestNoKubernetes/serial/ProfileList 1.21
272 TestPause/serial/Pause 0.92
273 TestPause/serial/VerifyStatus 0.44
274 TestNoKubernetes/serial/Stop 1.38
275 TestPause/serial/Unpause 0.74
276 TestPause/serial/PauseAgain 1.14
277 TestNoKubernetes/serial/StartNoArgs 7.18
278 TestPause/serial/DeletePaused 2.81
279 TestPause/serial/VerifyDeletedResources 0.35
280 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.39
281 TestStoppedBinaryUpgrade/Setup 1.17
282 TestStoppedBinaryUpgrade/Upgrade 117.02
283 TestStoppedBinaryUpgrade/MinikubeLogs 1.31
298 TestNetworkPlugins/group/false 5.04
303 TestStartStop/group/old-k8s-version/serial/FirstStart 168.49
305 TestStartStop/group/no-preload/serial/FirstStart 79.75
306 TestStartStop/group/old-k8s-version/serial/DeployApp 9.83
307 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.72
308 TestStartStop/group/old-k8s-version/serial/Stop 12.41
309 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.39
311 TestStartStop/group/no-preload/serial/DeployApp 8.51
312 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.41
313 TestStartStop/group/no-preload/serial/Stop 12.86
314 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.22
315 TestStartStop/group/no-preload/serial/SecondStart 267.35
316 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
317 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.09
318 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.29
319 TestStartStop/group/no-preload/serial/Pause 3.39
321 TestStartStop/group/embed-certs/serial/FirstStart 65.79
322 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
323 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.15
324 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.27
325 TestStartStop/group/old-k8s-version/serial/Pause 3.33
327 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 62.13
328 TestStartStop/group/embed-certs/serial/DeployApp 9.59
329 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.78
330 TestStartStop/group/embed-certs/serial/Stop 12.29
331 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
332 TestStartStop/group/embed-certs/serial/SecondStart 267.74
333 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.43
334 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.58
335 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.24
336 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.22
337 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 267.41
338 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
339 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.12
340 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.27
341 TestStartStop/group/embed-certs/serial/Pause 3.11
343 TestStartStop/group/newest-cni/serial/FirstStart 49
344 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
345 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 6.11
346 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.24
347 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.69
348 TestNetworkPlugins/group/auto/Start 67.38
349 TestStartStop/group/newest-cni/serial/DeployApp 0
350 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.75
351 TestStartStop/group/newest-cni/serial/Stop 1.69
352 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.25
353 TestStartStop/group/newest-cni/serial/SecondStart 24.63
354 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
355 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
356 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.33
357 TestStartStop/group/newest-cni/serial/Pause 3.94
358 TestNetworkPlugins/group/kindnet/Start 66.83
359 TestNetworkPlugins/group/auto/KubeletFlags 0.42
360 TestNetworkPlugins/group/auto/NetCatPod 9.39
361 TestNetworkPlugins/group/auto/DNS 0.28
362 TestNetworkPlugins/group/auto/Localhost 0.29
363 TestNetworkPlugins/group/auto/HairPin 0.25
364 TestNetworkPlugins/group/calico/Start 74.56
365 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
366 TestNetworkPlugins/group/kindnet/KubeletFlags 0.41
367 TestNetworkPlugins/group/kindnet/NetCatPod 10.31
368 TestNetworkPlugins/group/kindnet/DNS 0.28
369 TestNetworkPlugins/group/kindnet/Localhost 0.24
370 TestNetworkPlugins/group/kindnet/HairPin 0.26
371 TestNetworkPlugins/group/custom-flannel/Start 64.4
372 TestNetworkPlugins/group/calico/ControllerPod 6.01
373 TestNetworkPlugins/group/calico/KubeletFlags 0.41
374 TestNetworkPlugins/group/calico/NetCatPod 10.36
375 TestNetworkPlugins/group/calico/DNS 0.23
376 TestNetworkPlugins/group/calico/Localhost 0.21
377 TestNetworkPlugins/group/calico/HairPin 0.19
378 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.41
379 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.41
380 TestNetworkPlugins/group/enable-default-cni/Start 84.37
381 TestNetworkPlugins/group/custom-flannel/DNS 0.18
382 TestNetworkPlugins/group/custom-flannel/Localhost 0.17
383 TestNetworkPlugins/group/custom-flannel/HairPin 0.2
384 TestNetworkPlugins/group/flannel/Start 62.15
385 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.38
386 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.28
387 TestNetworkPlugins/group/enable-default-cni/DNS 0.2
388 TestNetworkPlugins/group/enable-default-cni/Localhost 0.18
389 TestNetworkPlugins/group/enable-default-cni/HairPin 0.19
390 TestNetworkPlugins/group/flannel/ControllerPod 6.01
391 TestNetworkPlugins/group/flannel/KubeletFlags 0.41
392 TestNetworkPlugins/group/flannel/NetCatPod 9.37
393 TestNetworkPlugins/group/flannel/DNS 0.3
394 TestNetworkPlugins/group/flannel/Localhost 0.27
395 TestNetworkPlugins/group/flannel/HairPin 0.22
396 TestNetworkPlugins/group/bridge/Start 86.82
397 TestNetworkPlugins/group/bridge/KubeletFlags 0.29
398 TestNetworkPlugins/group/bridge/NetCatPod 9.26
399 TestNetworkPlugins/group/bridge/DNS 0.19
400 TestNetworkPlugins/group/bridge/Localhost 0.17
401 TestNetworkPlugins/group/bridge/HairPin 0.18
TestDownloadOnly/v1.20.0/json-events (8.57s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-847118 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-847118 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (8.571830671s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (8.57s)
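The json-events subtest drives `minikube start -o=json`, which prints one JSON event per stdout line. The following is a rough Go sketch of consuming that stream, decoding each line into a generic map so no exact event schema is assumed; the binary path matches the report's out/minikube-linux-arm64, while the profile name download-only-demo is made up for illustration.

// Rough sketch (not from the test suite) of consuming `minikube start -o=json`:
// each stdout line is a JSON event, decoded here into a generic map so no exact
// schema is assumed. Binary path and profile name are illustrative.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "start",
		"-o=json", "--download-only", "-p", "download-only-demo",
		"--force", "--kubernetes-version=v1.20.0",
		"--container-runtime=containerd", "--driver=docker")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		log.Fatal(err)
	}
	if err := cmd.Start(); err != nil {
		log.Fatal(err)
	}
	sc := bufio.NewScanner(stdout)
	for sc.Scan() {
		var ev map[string]interface{}
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip any non-JSON noise on the stream
		}
		// Print the event "type" field if present; treated as optional here.
		fmt.Printf("event type=%v\n", ev["type"])
	}
	if err := cmd.Wait(); err != nil {
		log.Fatalf("minikube start failed: %v", err)
	}
}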

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-847118
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-847118: exit status 85 (83.286793ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-847118 | jenkins | v1.32.0 | 16 Mar 24 16:55 UTC |          |
	|         | -p download-only-847118        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/16 16:55:15
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0316 16:55:15.340090  285638 out.go:291] Setting OutFile to fd 1 ...
	I0316 16:55:15.340289  285638 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0316 16:55:15.340318  285638 out.go:304] Setting ErrFile to fd 2...
	I0316 16:55:15.340343  285638 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0316 16:55:15.340615  285638 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18277-280225/.minikube/bin
	W0316 16:55:15.340773  285638 root.go:314] Error reading config file at /home/jenkins/minikube-integration/18277-280225/.minikube/config/config.json: open /home/jenkins/minikube-integration/18277-280225/.minikube/config/config.json: no such file or directory
	I0316 16:55:15.341193  285638 out.go:298] Setting JSON to true
	I0316 16:55:15.342056  285638 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":9462,"bootTime":1710598654,"procs":146,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0316 16:55:15.342184  285638 start.go:139] virtualization:  
	I0316 16:55:15.345324  285638 out.go:97] [download-only-847118] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	W0316 16:55:15.345514  285638 preload.go:294] Failed to list preload files: open /home/jenkins/minikube-integration/18277-280225/.minikube/cache/preloaded-tarball: no such file or directory
	I0316 16:55:15.345559  285638 notify.go:220] Checking for updates...
	I0316 16:55:15.347553  285638 out.go:169] MINIKUBE_LOCATION=18277
	I0316 16:55:15.350302  285638 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0316 16:55:15.352226  285638 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18277-280225/kubeconfig
	I0316 16:55:15.354115  285638 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18277-280225/.minikube
	I0316 16:55:15.355995  285638 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0316 16:55:15.359261  285638 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0316 16:55:15.359503  285638 driver.go:392] Setting default libvirt URI to qemu:///system
	I0316 16:55:15.381678  285638 docker.go:122] docker version: linux-25.0.4:Docker Engine - Community
	I0316 16:55:15.381781  285638 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0316 16:55:15.446360  285638 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:56 SystemTime:2024-03-16 16:55:15.437184851 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0316 16:55:15.446467  285638 docker.go:295] overlay module found
	I0316 16:55:15.448762  285638 out.go:97] Using the docker driver based on user configuration
	I0316 16:55:15.448785  285638 start.go:297] selected driver: docker
	I0316 16:55:15.448792  285638 start.go:901] validating driver "docker" against <nil>
	I0316 16:55:15.448900  285638 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0316 16:55:15.502916  285638 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:56 SystemTime:2024-03-16 16:55:15.494331569 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0316 16:55:15.503071  285638 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0316 16:55:15.503361  285638 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0316 16:55:15.503511  285638 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0316 16:55:15.505740  285638 out.go:169] Using Docker driver with root privileges
	I0316 16:55:15.507928  285638 cni.go:84] Creating CNI manager for ""
	I0316 16:55:15.507949  285638 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0316 16:55:15.507963  285638 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0316 16:55:15.508076  285638 start.go:340] cluster config:
	{Name:download-only-847118 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-847118 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0316 16:55:15.510217  285638 out.go:97] Starting "download-only-847118" primary control-plane node in "download-only-847118" cluster
	I0316 16:55:15.510235  285638 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0316 16:55:15.512289  285638 out.go:97] Pulling base image v0.0.42-1710284843-18375 ...
	I0316 16:55:15.512330  285638 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0316 16:55:15.512481  285638 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f in local docker daemon
	I0316 16:55:15.527038  285638 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f to local cache
	I0316 16:55:15.527735  285638 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f in local cache directory
	I0316 16:55:15.527849  285638 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f to local cache
	I0316 16:55:15.579961  285638 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0316 16:55:15.579990  285638 cache.go:56] Caching tarball of preloaded images
	I0316 16:55:15.580686  285638 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0316 16:55:15.582920  285638 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0316 16:55:15.582944  285638 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I0316 16:55:15.700632  285638 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:7e3d48ccb9f143791669d02e14ce1643 -> /home/jenkins/minikube-integration/18277-280225/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-847118 host does not exist
	  To start a cluster, run: "minikube start -p download-only-847118"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.08s)
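Note: the exit status 85 above appears to be the accepted outcome for a download-only profile; the captured stdout itself says the control-plane host does not exist, so there is nothing for "minikube logs" to collect, and the test still passes. A minimal sketch of tolerating that exit code from Go, assuming the binary path and profile name shown in this report (this is illustrative only, not the actual aaa_download_only_test.go helper):

// Hypothetical sketch: run "minikube logs" for a download-only profile and
// treat exit status 85 as expected, since no control-plane host was created.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "logs", "-p", "download-only-847118")
	out, err := cmd.CombinedOutput()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 85 {
		fmt.Println("expected: only cached artifacts exist, no host to collect logs from")
	} else if err != nil {
		fmt.Printf("unexpected failure: %v\n%s\n", err, out)
	}
}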

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.23s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.23s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-847118
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/json-events (10.17s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-534627 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-534627 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (10.164865295s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (10.17s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-534627
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-534627: exit status 85 (87.337444ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-847118 | jenkins | v1.32.0 | 16 Mar 24 16:55 UTC |                     |
	|         | -p download-only-847118        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.32.0 | 16 Mar 24 16:55 UTC | 16 Mar 24 16:55 UTC |
	| delete  | -p download-only-847118        | download-only-847118 | jenkins | v1.32.0 | 16 Mar 24 16:55 UTC | 16 Mar 24 16:55 UTC |
	| start   | -o=json --download-only        | download-only-534627 | jenkins | v1.32.0 | 16 Mar 24 16:55 UTC |                     |
	|         | -p download-only-534627        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/16 16:55:24
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0316 16:55:24.371719  285796 out.go:291] Setting OutFile to fd 1 ...
	I0316 16:55:24.371858  285796 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0316 16:55:24.371870  285796 out.go:304] Setting ErrFile to fd 2...
	I0316 16:55:24.371876  285796 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0316 16:55:24.372141  285796 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18277-280225/.minikube/bin
	I0316 16:55:24.372564  285796 out.go:298] Setting JSON to true
	I0316 16:55:24.373367  285796 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":9471,"bootTime":1710598654,"procs":144,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0316 16:55:24.373450  285796 start.go:139] virtualization:  
	I0316 16:55:24.376179  285796 out.go:97] [download-only-534627] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0316 16:55:24.378065  285796 out.go:169] MINIKUBE_LOCATION=18277
	I0316 16:55:24.376441  285796 notify.go:220] Checking for updates...
	I0316 16:55:24.380225  285796 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0316 16:55:24.382308  285796 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18277-280225/kubeconfig
	I0316 16:55:24.384182  285796 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18277-280225/.minikube
	I0316 16:55:24.386312  285796 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0316 16:55:24.390514  285796 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0316 16:55:24.390803  285796 driver.go:392] Setting default libvirt URI to qemu:///system
	I0316 16:55:24.415941  285796 docker.go:122] docker version: linux-25.0.4:Docker Engine - Community
	I0316 16:55:24.416070  285796 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0316 16:55:24.480201  285796 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:50 SystemTime:2024-03-16 16:55:24.471047573 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0316 16:55:24.480304  285796 docker.go:295] overlay module found
	I0316 16:55:24.482600  285796 out.go:97] Using the docker driver based on user configuration
	I0316 16:55:24.482625  285796 start.go:297] selected driver: docker
	I0316 16:55:24.482631  285796 start.go:901] validating driver "docker" against <nil>
	I0316 16:55:24.482740  285796 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0316 16:55:24.540328  285796 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:50 SystemTime:2024-03-16 16:55:24.531303024 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0316 16:55:24.540499  285796 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0316 16:55:24.540783  285796 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0316 16:55:24.540945  285796 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0316 16:55:24.543269  285796 out.go:169] Using Docker driver with root privileges
	I0316 16:55:24.545153  285796 cni.go:84] Creating CNI manager for ""
	I0316 16:55:24.545183  285796 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0316 16:55:24.545193  285796 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0316 16:55:24.545292  285796 start.go:340] cluster config:
	{Name:download-only-534627 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-534627 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0316 16:55:24.547661  285796 out.go:97] Starting "download-only-534627" primary control-plane node in "download-only-534627" cluster
	I0316 16:55:24.547693  285796 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0316 16:55:24.549627  285796 out.go:97] Pulling base image v0.0.42-1710284843-18375 ...
	I0316 16:55:24.549662  285796 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0316 16:55:24.549784  285796 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f in local docker daemon
	I0316 16:55:24.564881  285796 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f to local cache
	I0316 16:55:24.565014  285796 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f in local cache directory
	I0316 16:55:24.565032  285796 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f in local cache directory, skipping pull
	I0316 16:55:24.565037  285796 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f exists in cache, skipping pull
	I0316 16:55:24.565055  285796 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f as a tarball
	I0316 16:55:24.617949  285796 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4
	I0316 16:55:24.617988  285796 cache.go:56] Caching tarball of preloaded images
	I0316 16:55:24.618755  285796 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0316 16:55:24.621195  285796 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I0316 16:55:24.621241  285796 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4 ...
	I0316 16:55:24.730619  285796 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4?checksum=md5:cc2d75db20c4d651f0460755d6df7b03 -> /home/jenkins/minikube-integration/18277-280225/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4
	I0316 16:55:32.184946  285796 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4 ...
	I0316 16:55:32.185062  285796 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/18277-280225/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4 ...
	I0316 16:55:33.107147  285796 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on containerd
	I0316 16:55:33.107525  285796 profile.go:142] Saving config to /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/download-only-534627/config.json ...
	I0316 16:55:33.107560  285796 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/download-only-534627/config.json: {Name:mk914c42780bd7543563da29f94684bac6d4e4a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 16:55:33.108354  285796 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0316 16:55:33.109049  285796 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/18277-280225/.minikube/cache/linux/arm64/v1.28.4/kubectl
	
	
	* The control-plane node download-only-534627 host does not exist
	  To start a cluster, run: "minikube start -p download-only-534627"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.09s)
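Note: the preload download URLs in the log above carry their expected digest in a "?checksum=md5:..." query parameter, and the log shows the tarball's checksum being saved and then verified (preload.go:248/255) before the profile config is written. A hedged sketch of that verification step, using the cache path and digest reported for the v1.28.4 tarball; this is illustrative Go, not minikube's own preload code:

// Hypothetical sketch: compute the MD5 of the downloaded preload tarball and
// compare it with the digest carried in the download URL's query parameter.
package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"log"
	"os"
)

func main() {
	const expected = "cc2d75db20c4d651f0460755d6df7b03" // from ?checksum=md5:... above
	f, err := os.Open("/home/jenkins/minikube-integration/18277-280225/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		log.Fatal(err)
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != expected {
		log.Fatalf("checksum mismatch: got %s, want %s", got, expected)
	}
	fmt.Println("preload checksum verified")
}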

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.4/DeleteAll (0.21s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-534627
--- PASS: TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/json-events (11.74s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-892980 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-892980 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (11.744431751s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (11.74s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-892980
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-892980: exit status 85 (82.842703ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-847118 | jenkins | v1.32.0 | 16 Mar 24 16:55 UTC |                     |
	|         | -p download-only-847118           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0      |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 16 Mar 24 16:55 UTC | 16 Mar 24 16:55 UTC |
	| delete  | -p download-only-847118           | download-only-847118 | jenkins | v1.32.0 | 16 Mar 24 16:55 UTC | 16 Mar 24 16:55 UTC |
	| start   | -o=json --download-only           | download-only-534627 | jenkins | v1.32.0 | 16 Mar 24 16:55 UTC |                     |
	|         | -p download-only-534627           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 16 Mar 24 16:55 UTC | 16 Mar 24 16:55 UTC |
	| delete  | -p download-only-534627           | download-only-534627 | jenkins | v1.32.0 | 16 Mar 24 16:55 UTC | 16 Mar 24 16:55 UTC |
	| start   | -o=json --download-only           | download-only-892980 | jenkins | v1.32.0 | 16 Mar 24 16:55 UTC |                     |
	|         | -p download-only-892980           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/16 16:55:34
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0316 16:55:34.968058  285963 out.go:291] Setting OutFile to fd 1 ...
	I0316 16:55:34.968247  285963 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0316 16:55:34.968276  285963 out.go:304] Setting ErrFile to fd 2...
	I0316 16:55:34.968298  285963 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0316 16:55:34.968559  285963 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18277-280225/.minikube/bin
	I0316 16:55:34.968962  285963 out.go:298] Setting JSON to true
	I0316 16:55:34.969808  285963 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":9481,"bootTime":1710598654,"procs":144,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0316 16:55:34.969930  285963 start.go:139] virtualization:  
	I0316 16:55:34.972788  285963 out.go:97] [download-only-892980] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0316 16:55:34.975391  285963 out.go:169] MINIKUBE_LOCATION=18277
	I0316 16:55:34.973032  285963 notify.go:220] Checking for updates...
	I0316 16:55:34.977837  285963 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0316 16:55:34.980406  285963 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18277-280225/kubeconfig
	I0316 16:55:34.982225  285963 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18277-280225/.minikube
	I0316 16:55:34.984113  285963 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0316 16:55:34.988249  285963 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0316 16:55:34.988560  285963 driver.go:392] Setting default libvirt URI to qemu:///system
	I0316 16:55:35.014061  285963 docker.go:122] docker version: linux-25.0.4:Docker Engine - Community
	I0316 16:55:35.014190  285963 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0316 16:55:35.081290  285963 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:49 SystemTime:2024-03-16 16:55:35.070711549 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0316 16:55:35.081413  285963 docker.go:295] overlay module found
	I0316 16:55:35.083524  285963 out.go:97] Using the docker driver based on user configuration
	I0316 16:55:35.083564  285963 start.go:297] selected driver: docker
	I0316 16:55:35.083573  285963 start.go:901] validating driver "docker" against <nil>
	I0316 16:55:35.083780  285963 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0316 16:55:35.143930  285963 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:49 SystemTime:2024-03-16 16:55:35.134058378 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0316 16:55:35.144123  285963 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0316 16:55:35.144409  285963 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0316 16:55:35.144573  285963 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0316 16:55:35.146626  285963 out.go:169] Using Docker driver with root privileges
	I0316 16:55:35.148311  285963 cni.go:84] Creating CNI manager for ""
	I0316 16:55:35.148335  285963 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0316 16:55:35.148344  285963 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0316 16:55:35.148429  285963 start.go:340] cluster config:
	{Name:download-only-892980 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:download-only-892980 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.loc
al ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0
s}
	I0316 16:55:35.150529  285963 out.go:97] Starting "download-only-892980" primary control-plane node in "download-only-892980" cluster
	I0316 16:55:35.150555  285963 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0316 16:55:35.152484  285963 out.go:97] Pulling base image v0.0.42-1710284843-18375 ...
	I0316 16:55:35.152517  285963 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime containerd
	I0316 16:55:35.152690  285963 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f in local docker daemon
	I0316 16:55:35.168003  285963 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f to local cache
	I0316 16:55:35.168158  285963 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f in local cache directory
	I0316 16:55:35.168183  285963 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f in local cache directory, skipping pull
	I0316 16:55:35.168191  285963 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f exists in cache, skipping pull
	I0316 16:55:35.168200  285963 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f as a tarball
	I0316 16:55:35.224140  285963 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-arm64.tar.lz4
	I0316 16:55:35.224166  285963 cache.go:56] Caching tarball of preloaded images
	I0316 16:55:35.224325  285963 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime containerd
	I0316 16:55:35.226290  285963 out.go:97] Downloading Kubernetes v1.29.0-rc.2 preload ...
	I0316 16:55:35.226322  285963 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-arm64.tar.lz4 ...
	I0316 16:55:35.324586  285963 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-arm64.tar.lz4?checksum=md5:adc883bf092a67b4673b5b5787f99b2f -> /home/jenkins/minikube-integration/18277-280225/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-arm64.tar.lz4
	I0316 16:55:39.915028  285963 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-arm64.tar.lz4 ...
	I0316 16:55:39.915167  285963 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/18277-280225/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-arm64.tar.lz4 ...
	I0316 16:55:40.789718  285963 cache.go:59] Finished verifying existence of preloaded tar for v1.29.0-rc.2 on containerd
	I0316 16:55:40.790146  285963 profile.go:142] Saving config to /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/download-only-892980/config.json ...
	I0316 16:55:40.790191  285963 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/download-only-892980/config.json: {Name:mk08870ef9ce24290d873282578d06775d9b23af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 16:55:40.790941  285963 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime containerd
	I0316 16:55:40.791106  285963 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.0-rc.2/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.0-rc.2/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/18277-280225/.minikube/cache/linux/arm64/v1.29.0-rc.2/kubectl
	
	
	* The control-plane node download-only-892980 host does not exist
	  To start a cluster, run: "minikube start -p download-only-892980"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.08s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.21s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-892980
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestBinaryMirror (0.54s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-850856 --alsologtostderr --binary-mirror http://127.0.0.1:39169 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-850856" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-850856
--- PASS: TestBinaryMirror (0.54s)
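Note: TestBinaryMirror points minikube at a local mirror via "--binary-mirror http://127.0.0.1:39169", so the kubectl/kubelet/kubeadm downloads can be served from localhost instead of the public release bucket. A minimal stand-in for such a mirror, assuming a local directory laid out like the upstream release tree (the real test wires up its own server; the port and directory here are illustrative only):

// Hypothetical sketch: serve a directory of release binaries over HTTP so a
// --binary-mirror flag has something local to download from.
package main

import (
	"log"
	"net/http"
)

func main() {
	fs := http.FileServer(http.Dir("./mirror")) // e.g. ./mirror/v1.28.4/bin/linux/arm64/kubectl
	log.Println("serving binary mirror on 127.0.0.1:39169")
	log.Fatal(http.ListenAndServe("127.0.0.1:39169", fs))
}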

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.09s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-821353
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-821353: exit status 85 (84.969911ms)

                                                
                                                
-- stdout --
	* Profile "addons-821353" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-821353"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.09s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-821353
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-821353: exit status 85 (73.072728ms)

                                                
                                                
-- stdout --
	* Profile "addons-821353" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-821353"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
x
+
TestAddons/Setup (119.91s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-arm64 start -p addons-821353 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns
addons_test.go:109: (dbg) Done: out/minikube-linux-arm64 start -p addons-821353 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns: (1m59.909592439s)
--- PASS: TestAddons/Setup (119.91s)

                                                
                                    
x
+
TestAddons/parallel/Registry (15.78s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 59.927232ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-8vzh9" [b1612e75-26f6-4416-91f0-f432903d0021] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.005694283s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-gpbwt" [5aa0d90a-8b19-4457-ac50-9fc1a7169ebc] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.005124269s
addons_test.go:340: (dbg) Run:  kubectl --context addons-821353 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-821353 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-821353 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.5519817s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-arm64 -p addons-821353 ip
2024/03/16 16:58:03 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:388: (dbg) Run:  out/minikube-linux-arm64 -p addons-821353 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.78s)
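Note: the Registry test checks reachability twice: from inside the cluster with "wget --spider -S http://registry.kube-system.svc.cluster.local", and from the host with the debug GET against the node IP on port 5000 reported by "minikube ip" above. A hedged sketch of that external check; not the test's own helper code, and the address is the one observed in this run:

// Hypothetical sketch: confirm the registry addon answers HTTP on the node IP.
package main

import (
	"fmt"
	"log"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get("http://192.168.49.2:5000")
	if err != nil {
		log.Fatalf("registry not reachable: %v", err)
	}
	defer resp.Body.Close()
	fmt.Println("registry responded with", resp.Status)
}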

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (11.98s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-vclz8" [7a8b583b-35a2-4361-8070-7085b74e399c] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004197878s
addons_test.go:841: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-821353
addons_test.go:841: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-821353: (5.97278738s)
--- PASS: TestAddons/parallel/InspektorGadget (11.98s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (5.86s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 6.220073ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-69cf46c98-2rrqs" [08f4dba8-7457-424a-8c11-cc37bee4ee10] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.005810259s
addons_test.go:415: (dbg) Run:  kubectl --context addons-821353 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-linux-arm64 -p addons-821353 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.86s)

                                                
                                    
x
+
TestAddons/parallel/CSI (65.3s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 60.78015ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-821353 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821353 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821353 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821353 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821353 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821353 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821353 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821353 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821353 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821353 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821353 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821353 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821353 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821353 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821353 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821353 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821353 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821353 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821353 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821353 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821353 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821353 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821353 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821353 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821353 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821353 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821353 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821353 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821353 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821353 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821353 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821353 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821353 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-821353 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [30cad8b7-7d06-4bad-bd73-07247f9eb992] Pending
helpers_test.go:344: "task-pv-pod" [30cad8b7-7d06-4bad-bd73-07247f9eb992] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [30cad8b7-7d06-4bad-bd73-07247f9eb992] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.004758901s
addons_test.go:584: (dbg) Run:  kubectl --context addons-821353 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-821353 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-821353 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-821353 delete pod task-pv-pod
addons_test.go:600: (dbg) Run:  kubectl --context addons-821353 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-821353 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821353 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821353 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821353 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-821353 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [35ffde7d-0535-4100-b0e3-cd4ea7f6cf9d] Pending
helpers_test.go:344: "task-pv-pod-restore" [35ffde7d-0535-4100-b0e3-cd4ea7f6cf9d] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [35ffde7d-0535-4100-b0e3-cd4ea7f6cf9d] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004926634s
addons_test.go:626: (dbg) Run:  kubectl --context addons-821353 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-821353 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-821353 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-arm64 -p addons-821353 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-linux-arm64 -p addons-821353 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.801020326s)
addons_test.go:642: (dbg) Run:  out/minikube-linux-arm64 -p addons-821353 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (65.30s)
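
Note: the CSI block above exercises the csi-hostpath-driver addon end to end (claim, pod, snapshot, delete, restore). A minimal manual sketch of the same flow follows; the VolumeSnapshot manifest is an assumed stand-in for testdata/csi-hostpath-driver/snapshot.yaml, and the csi-hostpath-snapclass name is an assumption about the class the addon installs.
	kubectl --context addons-821353 create -f testdata/csi-hostpath-driver/pvc.yaml
	kubectl --context addons-821353 create -f testdata/csi-hostpath-driver/pv-pod.yaml
	# assumed equivalent of testdata/csi-hostpath-driver/snapshot.yaml
	kubectl --context addons-821353 apply -f - <<-'EOF'
	apiVersion: snapshot.storage.k8s.io/v1
	kind: VolumeSnapshot
	metadata:
	  name: new-snapshot-demo
	spec:
	  volumeSnapshotClassName: csi-hostpath-snapclass
	  source:
	    persistentVolumeClaimName: hpvc
	EOF
	kubectl --context addons-821353 delete pod task-pv-pod
	kubectl --context addons-821353 delete pvc hpvc
	kubectl --context addons-821353 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
	kubectl --context addons-821353 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml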

                                                
                                    
TestAddons/parallel/Headlamp (10.46s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-821353 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-821353 --alsologtostderr -v=1: (1.459519224s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5485c556b-jmmqf" [bdc077a4-ee16-4f49-b3ef-9eae07aaab2b] Pending
helpers_test.go:344: "headlamp-5485c556b-jmmqf" [bdc077a4-ee16-4f49-b3ef-9eae07aaab2b] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5485c556b-jmmqf" [bdc077a4-ee16-4f49-b3ef-9eae07aaab2b] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 9.004274126s
--- PASS: TestAddons/parallel/Headlamp (10.46s)
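
Note: the manual equivalent of the Headlamp check above is enabling the addon and waiting on the same label selector (a sketch; the kubectl wait form is an assumed stand-in for the harness's poll loop):
	out/minikube-linux-arm64 addons enable headlamp -p addons-821353 --alsologtostderr -v=1
	kubectl --context addons-821353 -n headlamp wait --for=condition=ready pod -l app.kubernetes.io/name=headlamp --timeout=8m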

                                                
                                    
TestAddons/parallel/CloudSpanner (5.81s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6548d5df46-rw4w5" [bbccabe9-fd37-4b1e-8644-df0dd28d6f9e] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.006383336s
addons_test.go:860: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-821353
--- PASS: TestAddons/parallel/CloudSpanner (5.81s)

                                                
                                    
TestAddons/parallel/LocalPath (53.73s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-821353 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-821353 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821353 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821353 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821353 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821353 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821353 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-821353 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [ba09c739-4c42-48aa-9237-4068f91832c3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [ba09c739-4c42-48aa-9237-4068f91832c3] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [ba09c739-4c42-48aa-9237-4068f91832c3] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.004106226s
addons_test.go:891: (dbg) Run:  kubectl --context addons-821353 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-arm64 -p addons-821353 ssh "cat /opt/local-path-provisioner/pvc-f61fb113-086f-438d-bdfb-3006d0f3556b_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-821353 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-821353 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-arm64 -p addons-821353 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-linux-arm64 -p addons-821353 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.264624202s)
--- PASS: TestAddons/parallel/LocalPath (53.73s)
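
Note: the LocalPath block drives the storage-provisioner-rancher (local-path) addon. A manual sketch follows; the PVC below is an assumed stand-in for testdata/storage-provisioner-rancher/pvc.yaml, and the local-path storage class name and size are assumptions about what the addon installs.
	kubectl --context addons-821353 apply -f - <<-'EOF'
	apiVersion: v1
	kind: PersistentVolumeClaim
	metadata:
	  name: test-pvc
	spec:
	  storageClassName: local-path
	  accessModes: ["ReadWriteOnce"]
	  resources:
	    requests:
	      storage: 64Mi
	EOF
	kubectl --context addons-821353 apply -f testdata/storage-provisioner-rancher/pod.yaml
	out/minikube-linux-arm64 -p addons-821353 ssh "ls /opt/local-path-provisioner/"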

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.59s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-qkppj" [c0c89264-7552-4313-b0e3-a9203afe811f] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004725019s
addons_test.go:955: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-821353
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.59s)

                                                
                                    
TestAddons/parallel/Yakd (5.01s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-qmd85" [962adc2e-1eeb-4835-8537-bfd0e7f8eb2c] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.00442051s
--- PASS: TestAddons/parallel/Yakd (5.01s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.18s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-821353 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-821353 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.18s)

                                                
                                    
TestAddons/StoppedEnableDisable (12.25s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-821353
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-821353: (11.962604415s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-821353
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-821353
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-821353
--- PASS: TestAddons/StoppedEnableDisable (12.25s)

                                                
                                    
TestCertOptions (36.9s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-380412 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-380412 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (34.124203098s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-380412 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-380412 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-380412 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-380412" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-380412
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-380412: (2.125125462s)
--- PASS: TestCertOptions (36.90s)
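
Note: TestCertOptions asserts that the extra SANs and the non-default apiserver port end up in the generated certificate. A manual spot-check (sketch; the grep is added here, the rest mirrors the commands logged above):
	out/minikube-linux-arm64 start -p cert-options-380412 --memory=2048 --apiserver-ips=192.168.15.15 --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker --container-runtime=containerd
	out/minikube-linux-arm64 -p cert-options-380412 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A2 "Subject Alternative Name"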

                                                
                                    
TestCertExpiration (233.19s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-906495 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd
E0316 17:37:48.671565  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/addons-821353/client.crt: no such file or directory
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-906495 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (44.402839805s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-906495 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-906495 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (6.482974284s)
helpers_test.go:175: Cleaning up "cert-expiration-906495" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-906495
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-906495: (2.305998407s)
--- PASS: TestCertExpiration (233.19s)
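
Note: TestCertExpiration starts a cluster whose certificates expire after 3 minutes, waits out that window, then restarts with a one-year expiry to force re-issuance. The manual equivalent (sketch, same flags as logged):
	out/minikube-linux-arm64 start -p cert-expiration-906495 --memory=2048 --cert-expiration=3m --driver=docker --container-runtime=containerd
	# wait roughly 3 minutes for the certificates to lapse, then:
	out/minikube-linux-arm64 start -p cert-expiration-906495 --memory=2048 --cert-expiration=8760h --driver=docker --container-runtime=containerd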

                                                
                                    
TestForceSystemdFlag (43.77s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-641679 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-641679 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (41.272925839s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-641679 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-641679" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-641679
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-641679: (2.153435178s)
--- PASS: TestForceSystemdFlag (43.77s)
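
Note: the assertion behind TestForceSystemdFlag is that --force-systemd switches containerd to the systemd cgroup driver. A manual spot-check (sketch; the SystemdCgroup grep target is an assumption about the generated config):
	out/minikube-linux-arm64 start -p force-systemd-flag-641679 --memory=2048 --force-systemd --driver=docker --container-runtime=containerd
	out/minikube-linux-arm64 -p force-systemd-flag-641679 ssh "cat /etc/containerd/config.toml" | grep SystemdCgroup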

                                                
                                    
TestForceSystemdEnv (46.55s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-682017 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-682017 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (44.198697798s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-682017 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-682017" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-682017
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-682017: (2.044811017s)
--- PASS: TestForceSystemdEnv (46.55s)

                                                
                                    
TestDockerEnvContainerd (48.07s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-559934 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-559934 --driver=docker  --container-runtime=containerd: (29.920451s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-559934"
docker_test.go:189: (dbg) Done: /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-559934": (1.231243486s)
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-87CAX1Z1QmUP/agent.302702" SSH_AGENT_PID="302703" DOCKER_HOST=ssh://docker@127.0.0.1:33150 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-87CAX1Z1QmUP/agent.302702" SSH_AGENT_PID="302703" DOCKER_HOST=ssh://docker@127.0.0.1:33150 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-87CAX1Z1QmUP/agent.302702" SSH_AGENT_PID="302703" DOCKER_HOST=ssh://docker@127.0.0.1:33150 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (3.565776797s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-87CAX1Z1QmUP/agent.302702" SSH_AGENT_PID="302703" DOCKER_HOST=ssh://docker@127.0.0.1:33150 docker image ls"
helpers_test.go:175: Cleaning up "dockerenv-559934" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-559934
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-559934: (1.954961963s)
--- PASS: TestDockerEnvContainerd (48.07s)
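
Note: with the containerd runtime, docker-env relies on --ssh-host/--ssh-add so a host docker CLI reaches the docker endpoint inside the minikube node over SSH. A condensed manual sketch of the flow exercised above (the eval form is the usual way to consume docker-env output):
	out/minikube-linux-arm64 start -p dockerenv-559934 --driver=docker --container-runtime=containerd
	eval "$(out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-559934)"
	docker version
	DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env
	docker image ls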

                                                
                                    
TestErrorSpam/setup (31.7s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-041858 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-041858 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-041858 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-041858 --driver=docker  --container-runtime=containerd: (31.70060277s)
--- PASS: TestErrorSpam/setup (31.70s)

                                                
                                    
TestErrorSpam/start (0.73s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-041858 --log_dir /tmp/nospam-041858 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-041858 --log_dir /tmp/nospam-041858 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-041858 --log_dir /tmp/nospam-041858 start --dry-run
--- PASS: TestErrorSpam/start (0.73s)

                                                
                                    
TestErrorSpam/status (1.01s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-041858 --log_dir /tmp/nospam-041858 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-041858 --log_dir /tmp/nospam-041858 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-041858 --log_dir /tmp/nospam-041858 status
--- PASS: TestErrorSpam/status (1.01s)

                                                
                                    
TestErrorSpam/pause (1.69s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-041858 --log_dir /tmp/nospam-041858 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-041858 --log_dir /tmp/nospam-041858 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-041858 --log_dir /tmp/nospam-041858 pause
--- PASS: TestErrorSpam/pause (1.69s)

                                                
                                    
TestErrorSpam/unpause (1.8s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-041858 --log_dir /tmp/nospam-041858 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-041858 --log_dir /tmp/nospam-041858 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-041858 --log_dir /tmp/nospam-041858 unpause
--- PASS: TestErrorSpam/unpause (1.80s)

                                                
                                    
TestErrorSpam/stop (1.47s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-041858 --log_dir /tmp/nospam-041858 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-041858 --log_dir /tmp/nospam-041858 stop: (1.240633817s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-041858 --log_dir /tmp/nospam-041858 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-041858 --log_dir /tmp/nospam-041858 stop
--- PASS: TestErrorSpam/stop (1.47s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/18277-280225/.minikube/files/etc/test/nested/copy/285633/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (57.39s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-arm64 start -p functional-193375 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
E0316 17:02:48.674037  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/addons-821353/client.crt: no such file or directory
E0316 17:02:48.680969  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/addons-821353/client.crt: no such file or directory
E0316 17:02:48.691225  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/addons-821353/client.crt: no such file or directory
E0316 17:02:48.711484  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/addons-821353/client.crt: no such file or directory
E0316 17:02:48.751806  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/addons-821353/client.crt: no such file or directory
E0316 17:02:48.832080  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/addons-821353/client.crt: no such file or directory
E0316 17:02:48.992562  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/addons-821353/client.crt: no such file or directory
E0316 17:02:49.313126  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/addons-821353/client.crt: no such file or directory
E0316 17:02:49.953522  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/addons-821353/client.crt: no such file or directory
E0316 17:02:51.233970  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/addons-821353/client.crt: no such file or directory
E0316 17:02:53.794684  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/addons-821353/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-linux-arm64 start -p functional-193375 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (57.382501037s)
--- PASS: TestFunctional/serial/StartWithProxy (57.39s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (6s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-arm64 start -p functional-193375 --alsologtostderr -v=8
E0316 17:02:58.915511  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/addons-821353/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-arm64 start -p functional-193375 --alsologtostderr -v=8: (5.998063411s)
functional_test.go:659: soft start took 6.001489541s for "functional-193375" cluster.
--- PASS: TestFunctional/serial/SoftStart (6.00s)

                                                
                                    
TestFunctional/serial/KubeContext (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.07s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-193375 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (4.08s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-193375 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-193375 cache add registry.k8s.io/pause:3.1: (1.440001178s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-193375 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-193375 cache add registry.k8s.io/pause:3.3: (1.287602981s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-193375 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-193375 cache add registry.k8s.io/pause:latest: (1.353234687s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.08s)
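
Note: each cache add pulls the image into minikube's local cache and loads it into the node's runtime, which is why the later verify step can list it with crictl. Manual sketch (the grep is added here):
	out/minikube-linux-arm64 -p functional-193375 cache add registry.k8s.io/pause:3.1
	out/minikube-linux-arm64 -p functional-193375 cache add registry.k8s.io/pause:3.3
	out/minikube-linux-arm64 -p functional-193375 cache add registry.k8s.io/pause:latest
	out/minikube-linux-arm64 -p functional-193375 ssh sudo crictl images | grep pause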

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.49s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-193375 /tmp/TestFunctionalserialCacheCmdcacheadd_local612522316/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-arm64 -p functional-193375 cache add minikube-local-cache-test:functional-193375
functional_test.go:1090: (dbg) Run:  out/minikube-linux-arm64 -p functional-193375 cache delete minikube-local-cache-test:functional-193375
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-193375
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.49s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.33s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-arm64 -p functional-193375 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.33s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (2.12s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-arm64 -p functional-193375 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-arm64 -p functional-193375 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-193375 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (296.306516ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-arm64 -p functional-193375 cache reload
E0316 17:03:09.155758  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/addons-821353/client.crt: no such file or directory
functional_test.go:1154: (dbg) Done: out/minikube-linux-arm64 -p functional-193375 cache reload: (1.174628192s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-arm64 -p functional-193375 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.12s)
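
Note: cache_reload removes an image from the node's runtime and confirms that cache reload pushes it back from the local cache. Manual sketch of the same sequence:
	out/minikube-linux-arm64 -p functional-193375 ssh sudo crictl rmi registry.k8s.io/pause:latest
	out/minikube-linux-arm64 -p functional-193375 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # expected to fail: image gone
	out/minikube-linux-arm64 -p functional-193375 cache reload
	out/minikube-linux-arm64 -p functional-193375 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again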

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.15s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.15s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.15s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-arm64 -p functional-193375 kubectl -- --context functional-193375 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.15s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-193375 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

                                                
                                    
TestFunctional/serial/ExtraConfig (47.01s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-arm64 start -p functional-193375 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0316 17:03:29.636050  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/addons-821353/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-arm64 start -p functional-193375 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (47.01054799s)
functional_test.go:757: restart took 47.010647713s for "functional-193375" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (47.01s)
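
Note: --extra-config passes component flags straight through to the kubeadm-managed control plane; the restart above is equivalent to (sketch):
	out/minikube-linux-arm64 start -p functional-193375 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
	kubectl --context functional-193375 get po -l tier=control-plane -n kube-system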

                                                
                                    
TestFunctional/serial/ComponentHealth (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-193375 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.69s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-arm64 -p functional-193375 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-arm64 -p functional-193375 logs: (1.685414452s)
--- PASS: TestFunctional/serial/LogsCmd (1.69s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (2.08s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-arm64 -p functional-193375 logs --file /tmp/TestFunctionalserialLogsFileCmd932371006/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-arm64 -p functional-193375 logs --file /tmp/TestFunctionalserialLogsFileCmd932371006/001/logs.txt: (2.083411259s)
--- PASS: TestFunctional/serial/LogsFileCmd (2.08s)

                                                
                                    
TestFunctional/serial/InvalidService (4.64s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-193375 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-193375
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-193375: exit status 115 (635.577816ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30761 |
	|-----------|-------------|-------------|---------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-193375 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.64s)
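
Note: the SVC_UNREACHABLE exit above means the Service exists but has no running backing pods. A quick manual diagnosis (sketch; the endpoints check is added here):
	kubectl --context functional-193375 apply -f testdata/invalidsvc.yaml
	kubectl --context functional-193375 get endpoints invalid-svc   # no addresses listed -> no running pod behind the service
	out/minikube-linux-arm64 service invalid-svc -p functional-193375   # exits with status 115
	kubectl --context functional-193375 delete -f testdata/invalidsvc.yaml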

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-193375 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-193375 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-193375 config get cpus: exit status 14 (92.56615ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-193375 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-193375 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-193375 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-193375 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-193375 config get cpus: exit status 14 (79.36826ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.46s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (8.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-193375 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-193375 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 316851: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (8.70s)

                                                
                                    
TestFunctional/parallel/DryRun (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-arm64 start -p functional-193375 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-193375 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (235.491129ms)

                                                
                                                
-- stdout --
	* [functional-193375] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18277
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18277-280225/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18277-280225/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0316 17:04:39.589578  316508 out.go:291] Setting OutFile to fd 1 ...
	I0316 17:04:39.589822  316508 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0316 17:04:39.589870  316508 out.go:304] Setting ErrFile to fd 2...
	I0316 17:04:39.589889  316508 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0316 17:04:39.590199  316508 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18277-280225/.minikube/bin
	I0316 17:04:39.590603  316508 out.go:298] Setting JSON to false
	I0316 17:04:39.591821  316508 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10026,"bootTime":1710598654,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0316 17:04:39.591924  316508 start.go:139] virtualization:  
	I0316 17:04:39.594271  316508 out.go:177] * [functional-193375] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0316 17:04:39.597178  316508 out.go:177]   - MINIKUBE_LOCATION=18277
	I0316 17:04:39.597205  316508 notify.go:220] Checking for updates...
	I0316 17:04:39.599706  316508 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0316 17:04:39.601450  316508 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18277-280225/kubeconfig
	I0316 17:04:39.603129  316508 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18277-280225/.minikube
	I0316 17:04:39.604899  316508 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0316 17:04:39.606648  316508 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0316 17:04:39.608622  316508 config.go:182] Loaded profile config "functional-193375": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0316 17:04:39.609261  316508 driver.go:392] Setting default libvirt URI to qemu:///system
	I0316 17:04:39.632778  316508 docker.go:122] docker version: linux-25.0.4:Docker Engine - Community
	I0316 17:04:39.632947  316508 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0316 17:04:39.738142  316508 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:57 SystemTime:2024-03-16 17:04:39.72863546 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarc
h64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerError
s:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0316 17:04:39.738317  316508 docker.go:295] overlay module found
	I0316 17:04:39.742858  316508 out.go:177] * Using the docker driver based on existing profile
	I0316 17:04:39.744710  316508 start.go:297] selected driver: docker
	I0316 17:04:39.744762  316508 start.go:901] validating driver "docker" against &{Name:functional-193375 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-193375 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:do
cker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0316 17:04:39.744897  316508 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0316 17:04:39.747910  316508 out.go:177] 
	W0316 17:04:39.750488  316508 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0316 17:04:39.752278  316508 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-arm64 start -p functional-193375 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.54s)
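
Note: --dry-run validates the requested configuration against the existing profile without touching the cluster; the 1800MB memory floor in the stderr above can be reproduced directly (sketch):
	out/minikube-linux-arm64 start -p functional-193375 --dry-run --memory 250MB --alsologtostderr --driver=docker --container-runtime=containerd   # exit status 23, RSRC_INSUFFICIENT_REQ_MEMORY
	out/minikube-linux-arm64 start -p functional-193375 --dry-run --alsologtostderr -v=1 --driver=docker --container-runtime=containerd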

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-arm64 start -p functional-193375 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-193375 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (253.604404ms)

                                                
                                                
-- stdout --
	* [functional-193375] minikube v1.32.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18277
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18277-280225/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18277-280225/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0316 17:04:39.381663  316465 out.go:291] Setting OutFile to fd 1 ...
	I0316 17:04:39.381832  316465 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0316 17:04:39.381862  316465 out.go:304] Setting ErrFile to fd 2...
	I0316 17:04:39.381882  316465 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0316 17:04:39.382939  316465 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18277-280225/.minikube/bin
	I0316 17:04:39.383557  316465 out.go:298] Setting JSON to false
	I0316 17:04:39.384718  316465 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10026,"bootTime":1710598654,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0316 17:04:39.384822  316465 start.go:139] virtualization:  
	I0316 17:04:39.387198  316465 out.go:177] * [functional-193375] minikube v1.32.0 sur Ubuntu 20.04 (arm64)
	I0316 17:04:39.389700  316465 out.go:177]   - MINIKUBE_LOCATION=18277
	I0316 17:04:39.389740  316465 notify.go:220] Checking for updates...
	I0316 17:04:39.392056  316465 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0316 17:04:39.394907  316465 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18277-280225/kubeconfig
	I0316 17:04:39.396621  316465 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18277-280225/.minikube
	I0316 17:04:39.398186  316465 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0316 17:04:39.399759  316465 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0316 17:04:39.402231  316465 config.go:182] Loaded profile config "functional-193375": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0316 17:04:39.402754  316465 driver.go:392] Setting default libvirt URI to qemu:///system
	I0316 17:04:39.424735  316465 docker.go:122] docker version: linux-25.0.4:Docker Engine - Community
	I0316 17:04:39.424846  316465 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0316 17:04:39.504758  316465 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:57 SystemTime:2024-03-16 17:04:39.494667235 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0316 17:04:39.504869  316465 docker.go:295] overlay module found
	I0316 17:04:39.507087  316465 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0316 17:04:39.508656  316465 start.go:297] selected driver: docker
	I0316 17:04:39.508673  316465 start.go:901] validating driver "docker" against &{Name:functional-193375 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-193375 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:do
cker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0316 17:04:39.508802  316465 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0316 17:04:39.511347  316465 out.go:177] 
	W0316 17:04:39.513127  316465 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0316 17:04:39.514656  316465 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.25s)
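
Editor's note: a sketch of exercising the localized error path by hand. The log does not show how the harness selects French; the LC_ALL=fr environment variable below is an assumption, and the expected string is copied from the stderr above.

// i18n_sketch.go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64",
		"start", "-p", "functional-193375",
		"--dry-run", "--memory", "250MB",
		"--alsologtostderr", "--driver=docker", "--container-runtime=containerd")
	// Assumption: a French locale is enough to get the translated message.
	cmd.Env = append(os.Environ(), "LC_ALL=fr")
	out, _ := cmd.CombinedOutput()
	// The run above printed the localized RSRC_INSUFFICIENT_REQ_MEMORY error.
	if strings.Contains(string(out), "Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY") {
		fmt.Println("localized error message found")
	}
}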

                                                
                                    
TestFunctional/parallel/StatusCmd (1.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-arm64 -p functional-193375 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-arm64 -p functional-193375 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-arm64 -p functional-193375 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.33s)
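
Editor's note: a minimal sketch of the custom-format status call from the log, assuming the profile is already running. The template string (including the "kublet" key) is copied verbatim from the command that was executed.

// statuscmd_sketch.go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	format := "host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}"
	out, err := exec.Command("out/minikube-linux-arm64",
		"-p", "functional-193375", "status", "-f", format).Output()
	if err != nil {
		// minikube status signals a non-running component through its exit code.
		fmt.Println("status returned:", err)
	}
	// Split the comma-separated template output into individual fields.
	for _, field := range strings.Split(strings.TrimSpace(string(out)), ",") {
		fmt.Println(field)
	}
}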

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (10.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1623: (dbg) Run:  kubectl --context functional-193375 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-193375 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-7799dfb7c6-92tw5" [be354f85-fc33-4cc6-b1ab-2e68364d888f] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-7799dfb7c6-92tw5" [be354f85-fc33-4cc6-b1ab-2e68364d888f] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.004241465s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-arm64 -p functional-193375 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.49.2:30463
functional_test.go:1671: http://192.168.49.2:30463: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-7799dfb7c6-92tw5

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:30463
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.69s)
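
Editor's note: a sketch of the final check in this test: fetch the NodePort URL that `minikube service ... --url` printed and look for the serving pod's hostname in the echoserver response. The URL below is the one from this run and will differ on other clusters.

// serviceconnect_sketch.go
package main

import (
	"fmt"
	"io"
	"net/http"
	"strings"
)

func main() {
	resp, err := http.Get("http://192.168.49.2:30463")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// echoserver reflects the serving pod's hostname back in the body.
	if strings.Contains(string(body), "Hostname: hello-node-connect") {
		fmt.Println("service reachable through NodePort")
	}
}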

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-arm64 -p functional-193375 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-arm64 -p functional-193375 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.21s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (26.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [951133b5-037a-46e2-8d12-0da703f62f13] Running
E0316 17:04:10.596817  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/addons-821353/client.crt: no such file or directory
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.006974619s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-193375 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-193375 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-193375 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-193375 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [de27ce0f-d24a-40b9-adc2-caf07651e292] Pending
helpers_test.go:344: "sp-pod" [de27ce0f-d24a-40b9-adc2-caf07651e292] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [de27ce0f-d24a-40b9-adc2-caf07651e292] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.004210453s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-193375 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-193375 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-193375 delete -f testdata/storage-provisioner/pod.yaml: (1.210972207s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-193375 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [e12d1404-07d8-4ef1-8872-27bc20ac0f5a] Pending
helpers_test.go:344: "sp-pod" [e12d1404-07d8-4ef1-8872-27bc20ac0f5a] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [e12d1404-07d8-4ef1-8872-27bc20ac0f5a] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.003791273s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-193375 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (26.26s)
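
Editor's note: a rough sketch of the persistence check this test performs, using the same kubectl invocations that appear in the log. It assumes the testdata/storage-provisioner manifests are on disk; the wait-for-Running step between re-applying the pod and the final exec is elided here.

// pvc_sketch.go
package main

import (
	"fmt"
	"os/exec"
)

// run is a hypothetical helper that shells out to kubectl against the test context.
func run(args ...string) {
	out, err := exec.Command("kubectl",
		append([]string{"--context", "functional-193375"}, args...)...).CombinedOutput()
	fmt.Printf("$ kubectl %v\n%s", args, out)
	if err != nil {
		panic(err)
	}
}

func main() {
	// Write a marker file into the PVC-backed volume ...
	run("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
	// ... recreate the pod ...
	run("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	run("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	// ... and, once the new pod is Running, confirm the file is still there.
	run("exec", "sp-pod", "--", "ls", "/tmp/mount")
}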

                                                
                                    
TestFunctional/parallel/SSHCmd (0.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-arm64 -p functional-193375 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-arm64 -p functional-193375 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.73s)

                                                
                                    
TestFunctional/parallel/CpCmd (2.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-193375 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-193375 ssh -n functional-193375 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-193375 cp functional-193375:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3883214989/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-193375 ssh -n functional-193375 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-193375 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-193375 ssh -n functional-193375 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.46s)
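
Editor's note: a sketch of the round trip this test verifies: copy a local file into the node, read it back over `minikube ssh`, and compare. Paths and flags mirror the commands in the log.

// cpcmd_sketch.go
package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	mk := "out/minikube-linux-arm64"
	local, err := os.ReadFile("testdata/cp-test.txt")
	if err != nil {
		panic(err)
	}
	// minikube cp <local file> <path inside the node>
	if err := exec.Command(mk, "-p", "functional-193375", "cp",
		"testdata/cp-test.txt", "/home/docker/cp-test.txt").Run(); err != nil {
		panic(err)
	}
	// Read the copy back through ssh and compare byte-for-byte.
	remote, err := exec.Command(mk, "-p", "functional-193375", "ssh",
		"-n", "functional-193375", "sudo cat /home/docker/cp-test.txt").Output()
	if err != nil {
		panic(err)
	}
	fmt.Println("contents match:", bytes.Equal(local, remote))
}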

                                                
                                    
TestFunctional/parallel/FileSync (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/285633/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-arm64 -p functional-193375 ssh "sudo cat /etc/test/nested/copy/285633/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.31s)

                                                
                                    
TestFunctional/parallel/CertSync (2.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/285633.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-193375 ssh "sudo cat /etc/ssl/certs/285633.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/285633.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-193375 ssh "sudo cat /usr/share/ca-certificates/285633.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-193375 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/2856332.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-193375 ssh "sudo cat /etc/ssl/certs/2856332.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/2856332.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-193375 ssh "sudo cat /usr/share/ca-certificates/2856332.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-193375 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.14s)
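
Editor's note: a sketch of the cert-sync check, limited to what the log shows: the same certificate should be readable from both in-VM locations, so read both over ssh and compare them. The 285633 filename comes from this run.

// certsync_sketch.go
package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

// sshCat is a hypothetical helper that cats a file inside the node via minikube ssh.
func sshCat(path string) []byte {
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-193375",
		"ssh", "sudo cat "+path).Output()
	if err != nil {
		panic(err)
	}
	return out
}

func main() {
	a := sshCat("/etc/ssl/certs/285633.pem")
	b := sshCat("/usr/share/ca-certificates/285633.pem")
	// Both paths are populated from the same synced certificate.
	fmt.Println("copies identical:", bytes.Equal(a, b))
}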

                                                
                                    
TestFunctional/parallel/NodeLabels (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-193375 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.16s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-193375 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-193375 ssh "sudo systemctl is-active docker": exit status 1 (377.945466ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-193375 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-193375 ssh "sudo systemctl is-active crio": exit status 1 (366.138641ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.74s)
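
Editor's note: a sketch of the check above. On a containerd cluster, the docker and crio units must not be active, so `systemctl is-active <unit>` over ssh is expected to fail (the log shows exit status 1 wrapping ssh status 3 and "inactive" on stdout).

// runtime_disabled_sketch.go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	for _, unit := range []string{"docker", "crio"} {
		cmd := exec.Command("out/minikube-linux-arm64", "-p", "functional-193375",
			"ssh", "sudo systemctl is-active "+unit)
		out, err := cmd.CombinedOutput()
		// A nil error here would mean the unit is active, which would be a
		// failure for the containerd configuration under test.
		fmt.Printf("%s: %q (err=%v)\n", unit, string(out), err)
	}
}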

                                                
                                    
TestFunctional/parallel/License (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.33s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-193375 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-193375 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-193375 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 314382: os: process already finished
helpers_test.go:502: unable to terminate pid 314216: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-193375 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.63s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-193375 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-193375 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [a89ec8d4-3a5c-468c-9ab3-f7acb47e0fb6] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [a89ec8d4-3a5c-468c-9ab3-f7acb47e0fb6] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.004586429s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.48s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-193375 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.10s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.99.168.66 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
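
Editor's note: a sketch of the direct-access check: read the LoadBalancer ingress IP the tunnel assigned to nginx-svc (10.99.168.66 in this run) with the same jsonpath query used above, then fetch it over plain HTTP. Assumes `minikube tunnel` is still running in the background.

// tunnel_access_sketch.go
package main

import (
	"fmt"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-193375",
		"get", "svc", "nginx-svc",
		"-o", "jsonpath={.status.loadBalancer.ingress[0].ip}").Output()
	if err != nil {
		panic(err)
	}
	ip := strings.TrimSpace(string(out))
	// With the tunnel up, the LoadBalancer IP is reachable from the host.
	resp, err := http.Get("http://" + ip)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("tunnel endpoint", ip, "returned", resp.Status)
}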

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-193375 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (6.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1433: (dbg) Run:  kubectl --context functional-193375 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-193375 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-759d89bdcc-txj5z" [79a7eadc-2107-4634-946b-ba5b564d7407] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-759d89bdcc-txj5z" [79a7eadc-2107-4634-946b-ba5b564d7407] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.007617711s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.21s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.72s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-arm64 -p functional-193375 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.72s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.60s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1311: Took "527.167546ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1325: Took "75.557698ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.60s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-arm64 -p functional-193375 service list -o json
functional_test.go:1490: Took "671.206577ms" to run "out/minikube-linux-arm64 -p functional-193375 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.67s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1362: Took "412.327585ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1375: Took "90.020467ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.50s)
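
Editor's note: a sketch decoding `minikube profile list -o json` as run above. The struct below assumes the output keeps its usual top-level "valid"/"invalid" arrays with a "Name" field per profile; that shape is an assumption, not shown in this log.

// profile_json_sketch.go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "profile", "list", "-o", "json").Output()
	if err != nil {
		panic(err)
	}
	// Assumed shape: {"invalid":[...],"valid":[{"Name":"...", ...}, ...]}
	var profiles struct {
		Valid []struct {
			Name string `json:"Name"`
		} `json:"valid"`
	}
	if err := json.Unmarshal(out, &profiles); err != nil {
		panic(err)
	}
	for _, p := range profiles.Valid {
		fmt.Println("profile:", p.Name)
	}
}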

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-arm64 -p functional-193375 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.49.2:31102
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.58s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (7.92s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-193375 /tmp/TestFunctionalparallelMountCmdany-port4144250685/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1710608676810256136" to /tmp/TestFunctionalparallelMountCmdany-port4144250685/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1710608676810256136" to /tmp/TestFunctionalparallelMountCmdany-port4144250685/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1710608676810256136" to /tmp/TestFunctionalparallelMountCmdany-port4144250685/001/test-1710608676810256136
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-193375 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-193375 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (485.256893ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-193375 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-193375 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Mar 16 17:04 created-by-test
-rw-r--r-- 1 docker docker 24 Mar 16 17:04 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Mar 16 17:04 test-1710608676810256136
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-193375 ssh cat /mount-9p/test-1710608676810256136
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-193375 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [24acfc84-9e65-4ab5-9abf-87db01135327] Pending
helpers_test.go:344: "busybox-mount" [24acfc84-9e65-4ab5-9abf-87db01135327] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [24acfc84-9e65-4ab5-9abf-87db01135327] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [24acfc84-9e65-4ab5-9abf-87db01135327] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.007853676s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-193375 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-193375 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-193375 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-193375 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-193375 /tmp/TestFunctionalparallelMountCmdany-port4144250685/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.92s)
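
Editor's note: the first `findmnt` in the log fails because the 9p mount is not up yet, and the helper simply retries. Below is a minimal polling sketch of that retry, assuming the mount daemon was started separately with `minikube mount ...` as shown above.

// mount_poll_sketch.go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(30 * time.Second)
	for time.Now().Before(deadline) {
		err := exec.Command("out/minikube-linux-arm64", "-p", "functional-193375",
			"ssh", "findmnt -T /mount-9p | grep 9p").Run()
		if err == nil {
			fmt.Println("/mount-9p is mounted")
			return
		}
		// Not mounted yet; wait a moment and try again, as the test helper does.
		time.Sleep(time.Second)
	}
	fmt.Println("gave up waiting for /mount-9p")
}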

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-arm64 -p functional-193375 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.58s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-arm64 -p functional-193375 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.49.2:31102
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.40s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-193375 /tmp/TestFunctionalparallelMountCmdspecific-port88871721/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-193375 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-193375 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (641.751215ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-193375 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-193375 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-193375 /tmp/TestFunctionalparallelMountCmdspecific-port88871721/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-193375 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-193375 ssh "sudo umount -f /mount-9p": exit status 1 (485.613528ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-193375 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-193375 /tmp/TestFunctionalparallelMountCmdspecific-port88871721/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.78s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (2.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-193375 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2441453329/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-193375 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2441453329/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-193375 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2441453329/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-193375 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-193375 ssh "findmnt -T" /mount1: exit status 1 (656.003745ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
2024/03/16 17:04:48 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-193375 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-193375 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-193375 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-193375 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-193375 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2441453329/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-193375 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2441453329/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-193375 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2441453329/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.31s)

                                                
                                    
TestFunctional/parallel/Version/short (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-arm64 -p functional-193375 version --short
--- PASS: TestFunctional/parallel/Version/short (0.10s)

                                                
                                    
TestFunctional/parallel/Version/components (1.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-arm64 -p functional-193375 version -o=json --components
functional_test.go:2266: (dbg) Done: out/minikube-linux-arm64 -p functional-193375 version -o=json --components: (1.324838609s)
--- PASS: TestFunctional/parallel/Version/components (1.32s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-193375 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-193375 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-193375
docker.io/kindest/kindnetd:v20240202-8f1494ea
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-193375 image ls --format short --alsologtostderr:
I0316 17:05:07.346968  319034 out.go:291] Setting OutFile to fd 1 ...
I0316 17:05:07.347218  319034 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0316 17:05:07.347242  319034 out.go:304] Setting ErrFile to fd 2...
I0316 17:05:07.347259  319034 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0316 17:05:07.347502  319034 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18277-280225/.minikube/bin
I0316 17:05:07.348183  319034 config.go:182] Loaded profile config "functional-193375": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0316 17:05:07.348376  319034 config.go:182] Loaded profile config "functional-193375": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0316 17:05:07.348882  319034 cli_runner.go:164] Run: docker container inspect functional-193375 --format={{.State.Status}}
I0316 17:05:07.368568  319034 ssh_runner.go:195] Run: systemctl --version
I0316 17:05:07.368631  319034 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-193375
I0316 17:05:07.385988  319034 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33160 SSHKeyPath:/home/jenkins/minikube-integration/18277-280225/.minikube/machines/functional-193375/id_rsa Username:docker}
I0316 17:05:07.480063  319034 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)
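
Editor's note: the stderr above shows the listing is assembled from `sudo crictl images --output json` inside the node. A small sketch that decodes that output and prints the tags, assuming the usual crictl JSON shape (a top-level "images" array whose entries carry "repoTags"); that shape is an assumption, not confirmed by this log.

// image_list_sketch.go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// crictlImages models the assumed `crictl images --output json` structure.
type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-193375",
		"ssh", "sudo crictl images --output json").Output()
	if err != nil {
		panic(err)
	}
	var imgs crictlImages
	if err := json.Unmarshal(out, &imgs); err != nil {
		panic(err)
	}
	for _, img := range imgs.Images {
		for _, tag := range img.RepoTags {
			fmt.Println(tag)
		}
	}
}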

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-193375 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-193375 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| docker.io/library/nginx                     | alpine             | sha256:be5e6f | 17.6MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:ba04bb | 8.03MB |
| registry.k8s.io/kube-scheduler              | v1.28.4            | sha256:05c284 | 17.1MB |
| registry.k8s.io/pause                       | 3.3                | sha256:3d1873 | 249kB  |
| registry.k8s.io/pause                       | latest             | sha256:8cb209 | 71.3kB |
| docker.io/library/minikube-local-cache-test | functional-193375  | sha256:efcbd3 | 1.01kB |
| registry.k8s.io/echoserver-arm              | 1.8                | sha256:72565b | 45.3MB |
| registry.k8s.io/pause                       | 3.9                | sha256:829e9d | 268kB  |
| docker.io/kindest/kindnetd                  | v20240202-8f1494ea | sha256:4740c1 | 25.3MB |
| docker.io/library/nginx                     | latest             | sha256:070027 | 67.2MB |
| registry.k8s.io/kube-apiserver              | v1.28.4            | sha256:04b4c4 | 31.6MB |
| registry.k8s.io/pause                       | 3.1                | sha256:8057e0 | 262kB  |
| docker.io/kindest/kindnetd                  | v20230809-80a64d96 | sha256:04b4ea | 25.3MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:1611cd | 1.94MB |
| registry.k8s.io/kube-controller-manager     | v1.28.4            | sha256:9961cb | 30.4MB |
| registry.k8s.io/kube-proxy                  | v1.28.4            | sha256:3ca3ca | 22MB   |
| registry.k8s.io/coredns/coredns             | v1.10.1            | sha256:97e046 | 14.6MB |
| registry.k8s.io/etcd                        | 3.5.9-0            | sha256:9cdd64 | 86.5MB |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-193375 image ls --format table --alsologtostderr:
I0316 17:05:07.614300  319089 out.go:291] Setting OutFile to fd 1 ...
I0316 17:05:07.614474  319089 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0316 17:05:07.614486  319089 out.go:304] Setting ErrFile to fd 2...
I0316 17:05:07.614492  319089 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0316 17:05:07.614737  319089 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18277-280225/.minikube/bin
I0316 17:05:07.615327  319089 config.go:182] Loaded profile config "functional-193375": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0316 17:05:07.615454  319089 config.go:182] Loaded profile config "functional-193375": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0316 17:05:07.615948  319089 cli_runner.go:164] Run: docker container inspect functional-193375 --format={{.State.Status}}
I0316 17:05:07.638331  319089 ssh_runner.go:195] Run: systemctl --version
I0316 17:05:07.638392  319089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-193375
I0316 17:05:07.662796  319089 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33160 SSHKeyPath:/home/jenkins/minikube-integration/18277-280225/.minikube/machines/functional-193375/id_rsa Username:docker}
I0316 17:05:07.764713  319089 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.30s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-193375 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-193375 image ls --format json --alsologtostderr:
[{"id":"sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"45324675"},{"id":"sha256:04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419","repoDigests":["registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"31582354"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"74084559"},{"id":"sha256:efcbd3663020a9d845f8dd0ce4d81698853f0bf694c2baf8a90810b87f2d2b9e",
"repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-193375"],"size":"1006"},{"id":"sha256:be5e6f23a9904ed26efa7a49fb3d5e63d1c488dbb7b5134e869488afd747ec3f","repoDigests":["docker.io/library/nginx@sha256:0fefd803183ec3a8010fa9b2dab6c3a8445642f013a7b5f29e12b8634f67bd22"],"repoTags":["docker.io/library/nginx:alpine"],"size":"17601423"},{"id":"sha256:070027a3cbe09ac697570e31174acc1699701bd0626d2cf71e01623f41a10f53","repoDigests":["docker.io/library/nginx@sha256:6db391d1c0cfb30588ba0bf72ea999404f2764febf0f1f196acd5867ac7efa7e"],"repoTags":["docker.io/library/nginx:latest"],"size":"67216851"},{"id":"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"14557471"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8
s.io/pause:latest"],"size":"71300"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"268051"},{"id":"sha256:04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"25324029"},{"id":"sha256:1611cd
07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},{"id":"sha256:05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54","repoDigests":["registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"17082307"},{"id":"sha256:4740c1948d3fceb8d7dacc63033aa6299d80794ee4f4811539ec1081d9211f3d","repoDigests":["docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988"],"repoTags":["docker.io/kindest/kindnetd:v20240202-8f1494ea"],"size":"25336339"},{"id":"sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[
],"size":"18306114"},{"id":"sha256:9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace","repoDigests":["registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"86464836"},{"id":"sha256:9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"30360149"},{"id":"sha256:3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39","repoDigests":["registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"22001357"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-193375 image ls --format json --alsologtostderr:
I0316 17:05:07.352418  319035 out.go:291] Setting OutFile to fd 1 ...
I0316 17:05:07.352609  319035 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0316 17:05:07.352634  319035 out.go:304] Setting ErrFile to fd 2...
I0316 17:05:07.352683  319035 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0316 17:05:07.352980  319035 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18277-280225/.minikube/bin
I0316 17:05:07.353942  319035 config.go:182] Loaded profile config "functional-193375": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0316 17:05:07.354194  319035 config.go:182] Loaded profile config "functional-193375": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0316 17:05:07.354736  319035 cli_runner.go:164] Run: docker container inspect functional-193375 --format={{.State.Status}}
I0316 17:05:07.374083  319035 ssh_runner.go:195] Run: systemctl --version
I0316 17:05:07.374159  319035 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-193375
I0316 17:05:07.401704  319035 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33160 SSHKeyPath:/home/jenkins/minikube-integration/18277-280225/.minikube/machines/functional-193375/id_rsa Username:docker}
I0316 17:05:07.504317  319035 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.32s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-193375 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-193375 image ls --format yaml --alsologtostderr:
- id: sha256:04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "25324029"
- id: sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "74084559"
- id: sha256:efcbd3663020a9d845f8dd0ce4d81698853f0bf694c2baf8a90810b87f2d2b9e
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-193375
size: "1006"
- id: sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "14557471"
- id: sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "268051"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "18306114"
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "31582354"
- id: sha256:3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39
repoDigests:
- registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "22001357"
- id: sha256:05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "17082307"
- id: sha256:4740c1948d3fceb8d7dacc63033aa6299d80794ee4f4811539ec1081d9211f3d
repoDigests:
- docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988
repoTags:
- docker.io/kindest/kindnetd:v20240202-8f1494ea
size: "25336339"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "30360149"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:be5e6f23a9904ed26efa7a49fb3d5e63d1c488dbb7b5134e869488afd747ec3f
repoDigests:
- docker.io/library/nginx@sha256:0fefd803183ec3a8010fa9b2dab6c3a8445642f013a7b5f29e12b8634f67bd22
repoTags:
- docker.io/library/nginx:alpine
size: "17601423"
- id: sha256:070027a3cbe09ac697570e31174acc1699701bd0626d2cf71e01623f41a10f53
repoDigests:
- docker.io/library/nginx@sha256:6db391d1c0cfb30588ba0bf72ea999404f2764febf0f1f196acd5867ac7efa7e
repoTags:
- docker.io/library/nginx:latest
size: "67216851"
- id: sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "45324675"
- id: sha256:9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace
repoDigests:
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "86464836"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-193375 image ls --format yaml --alsologtostderr:
I0316 17:05:07.895989  319167 out.go:291] Setting OutFile to fd 1 ...
I0316 17:05:07.896163  319167 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0316 17:05:07.896169  319167 out.go:304] Setting ErrFile to fd 2...
I0316 17:05:07.896174  319167 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0316 17:05:07.896428  319167 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18277-280225/.minikube/bin
I0316 17:05:07.897038  319167 config.go:182] Loaded profile config "functional-193375": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0316 17:05:07.897206  319167 config.go:182] Loaded profile config "functional-193375": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0316 17:05:07.897759  319167 cli_runner.go:164] Run: docker container inspect functional-193375 --format={{.State.Status}}
I0316 17:05:07.921082  319167 ssh_runner.go:195] Run: systemctl --version
I0316 17:05:07.921146  319167 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-193375
I0316 17:05:07.950970  319167 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33160 SSHKeyPath:/home/jenkins/minikube-integration/18277-280225/.minikube/machines/functional-193375/id_rsa Username:docker}
I0316 17:05:08.052797  319167 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)
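
The three ImageList cases above (table, json, yaml) differ only in output formatting; each stderr trace ends with the same "sudo crictl images --output json" call against the node's containerd runtime. A minimal way to reproduce the listing by hand, assuming the functional-193375 profile from this run is still up (a sketch, not part of the test):

# List cached images through minikube in each supported format
out/minikube-linux-arm64 -p functional-193375 image ls --format table
out/minikube-linux-arm64 -p functional-193375 image ls --format json
out/minikube-linux-arm64 -p functional-193375 image ls --format yaml
# Or query containerd directly over SSH, which is what image ls does under the hood in these logs
out/minikube-linux-arm64 -p functional-193375 ssh "sudo crictl images --output json"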

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (2.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-arm64 -p functional-193375 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-193375 ssh pgrep buildkitd: exit status 1 (311.346799ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p functional-193375 image build -t localhost/my-image:functional-193375 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p functional-193375 image build -t localhost/my-image:functional-193375 testdata/build --alsologtostderr: (2.19333674s)
functional_test.go:322: (dbg) Stderr: out/minikube-linux-arm64 -p functional-193375 image build -t localhost/my-image:functional-193375 testdata/build --alsologtostderr:
I0316 17:05:08.001426  319178 out.go:291] Setting OutFile to fd 1 ...
I0316 17:05:08.001993  319178 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0316 17:05:08.002031  319178 out.go:304] Setting ErrFile to fd 2...
I0316 17:05:08.002037  319178 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0316 17:05:08.002355  319178 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18277-280225/.minikube/bin
I0316 17:05:08.003162  319178 config.go:182] Loaded profile config "functional-193375": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0316 17:05:08.004953  319178 config.go:182] Loaded profile config "functional-193375": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0316 17:05:08.005701  319178 cli_runner.go:164] Run: docker container inspect functional-193375 --format={{.State.Status}}
I0316 17:05:08.023512  319178 ssh_runner.go:195] Run: systemctl --version
I0316 17:05:08.023581  319178 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-193375
I0316 17:05:08.040777  319178 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33160 SSHKeyPath:/home/jenkins/minikube-integration/18277-280225/.minikube/machines/functional-193375/id_rsa Username:docker}
I0316 17:05:08.139881  319178 build_images.go:161] Building image from path: /tmp/build.4210777115.tar
I0316 17:05:08.139955  319178 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0316 17:05:08.148738  319178 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.4210777115.tar
I0316 17:05:08.152368  319178 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.4210777115.tar: stat -c "%s %y" /var/lib/minikube/build/build.4210777115.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.4210777115.tar': No such file or directory
I0316 17:05:08.152401  319178 ssh_runner.go:362] scp /tmp/build.4210777115.tar --> /var/lib/minikube/build/build.4210777115.tar (3072 bytes)
I0316 17:05:08.177783  319178 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.4210777115
I0316 17:05:08.186743  319178 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.4210777115 -xf /var/lib/minikube/build/build.4210777115.tar
I0316 17:05:08.195955  319178 containerd.go:379] Building image: /var/lib/minikube/build/build.4210777115
I0316 17:05:08.196033  319178 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.4210777115 --local dockerfile=/var/lib/minikube/build/build.4210777115 --output type=image,name=localhost/my-image:functional-193375
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.8s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.2s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.4s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.3s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.0s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:a11fb0ab8c0e9e0d641eaefdbbd91f960d60756201157ec39cfe92c46b651721
#8 exporting manifest sha256:a11fb0ab8c0e9e0d641eaefdbbd91f960d60756201157ec39cfe92c46b651721 0.0s done
#8 exporting config sha256:ac9cdd235553a29bce86f2d12eaf8d1df7141f75a3a2def87d1b282e609469a1 0.0s done
#8 naming to localhost/my-image:functional-193375 done
#8 DONE 0.1s
I0316 17:05:10.062800  319178 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.4210777115 --local dockerfile=/var/lib/minikube/build/build.4210777115 --output type=image,name=localhost/my-image:functional-193375: (1.866731747s)
I0316 17:05:10.062901  319178 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.4210777115
I0316 17:05:10.074588  319178 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.4210777115.tar
I0316 17:05:10.085908  319178 build_images.go:217] Built localhost/my-image:functional-193375 from /tmp/build.4210777115.tar
I0316 17:05:10.085940  319178 build_images.go:133] succeeded building to: functional-193375
I0316 17:05:10.085945  319178 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-193375 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.76s)
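
The buildkit trace above shows exactly three steps: [1/3] FROM gcr.io/k8s-minikube/busybox, [2/3] RUN true, [3/3] ADD content.txt /. The build context under testdata/build therefore amounts to roughly the following Dockerfile; this is inferred from the logged steps, not copied from the repository:

# testdata/build/Dockerfile (reconstructed from the build steps logged above)
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /

The same build can be replayed manually with the command the test ran:

out/minikube-linux-arm64 -p functional-193375 image build -t localhost/my-image:functional-193375 testdata/build --alsologtostderr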

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (2.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.494384922s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-193375
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.55s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-193375 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.23s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-193375 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.19s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-193375 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-arm64 -p functional-193375 image rm gcr.io/google-containers/addon-resizer:functional-193375 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-193375 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.50s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-193375
functional_test.go:423: (dbg) Run:  out/minikube-linux-arm64 -p functional-193375 image save --daemon gcr.io/google-containers/addon-resizer:functional-193375 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-193375
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.61s)
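
ImageSaveDaemon is a round trip between the host's docker daemon and the cluster's image store: the tag is deleted locally, restored from the cluster with "image save --daemon", and then confirmed with docker image inspect. A sketch of the same round trip outside the harness, using the tag set up earlier in this run:

# Drop the tag from the host docker daemon
docker rmi gcr.io/google-containers/addon-resizer:functional-193375
# Pull it back out of the cluster into the local daemon, then confirm it exists again
out/minikube-linux-arm64 -p functional-193375 image save --daemon gcr.io/google-containers/addon-resizer:functional-193375
docker image inspect gcr.io/google-containers/addon-resizer:functional-193375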

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.08s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-193375
--- PASS: TestFunctional/delete_addon-resizer_images (0.08s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-193375
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.03s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-193375
--- PASS: TestFunctional/delete_minikube_cached_images (0.03s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (133.34s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-335733 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E0316 17:05:32.519734  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/addons-821353/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-335733 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (2m12.516770743s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-335733 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (133.34s)
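
StartCluster brings up a multi-control-plane cluster in one shot: the --ha flag creates a cluster with multiple control-plane nodes (three in this run, per the later status output) behind a shared endpoint, which the stderr further down shows as https://192.168.49.254:8443. The bring-up and the follow-up health check, as run here:

# Create an HA cluster on the docker driver with the containerd runtime
out/minikube-linux-arm64 start -p ha-335733 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker --container-runtime=containerd
# Report host/kubelet/apiserver/kubeconfig state for every node in the profile
out/minikube-linux-arm64 -p ha-335733 status -v=7 --alsologtostderr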

                                                
                                    
TestMultiControlPlane/serial/DeployApp (31.57s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-335733 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-335733 -- rollout status deployment/busybox
E0316 17:07:48.671992  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/addons-821353/client.crt: no such file or directory
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-335733 -- rollout status deployment/busybox: (28.323509395s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-335733 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-335733 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-335733 -- exec busybox-5b5d89c9d6-98wnn -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-335733 -- exec busybox-5b5d89c9d6-9wc42 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-335733 -- exec busybox-5b5d89c9d6-swqqq -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-335733 -- exec busybox-5b5d89c9d6-98wnn -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-335733 -- exec busybox-5b5d89c9d6-9wc42 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-335733 -- exec busybox-5b5d89c9d6-swqqq -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-335733 -- exec busybox-5b5d89c9d6-98wnn -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-335733 -- exec busybox-5b5d89c9d6-9wc42 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-335733 -- exec busybox-5b5d89c9d6-swqqq -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (31.57s)
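
DeployApp is a per-pod DNS smoke test: apply the busybox deployment, wait for the rollout, then exec nslookup from every replica against an external name and the in-cluster service names. A condensed sketch of the same loop, assuming the ha-335733 profile and busybox deployment from this run:

out/minikube-linux-arm64 kubectl -p ha-335733 -- rollout status deployment/busybox
# Resolve one external and one in-cluster name from each pod in the default namespace
for pod in $(out/minikube-linux-arm64 kubectl -p ha-335733 -- get pods -o jsonpath='{.items[*].metadata.name}'); do
  out/minikube-linux-arm64 kubectl -p ha-335733 -- exec "$pod" -- nslookup kubernetes.io
  out/minikube-linux-arm64 kubectl -p ha-335733 -- exec "$pod" -- nslookup kubernetes.default.svc.cluster.local
done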

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.76s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-335733 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-335733 -- exec busybox-5b5d89c9d6-98wnn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-335733 -- exec busybox-5b5d89c9d6-98wnn -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-335733 -- exec busybox-5b5d89c9d6-9wc42 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-335733 -- exec busybox-5b5d89c9d6-9wc42 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-335733 -- exec busybox-5b5d89c9d6-swqqq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-335733 -- exec busybox-5b5d89c9d6-swqqq -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.76s)
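
PingHostFromPods recovers the host's gateway address from busybox's nslookup output (the resolved address sits on line 5 of that output, hence awk 'NR==5'), then pings it once from inside the pod. The same pipeline, annotated, assuming the busybox nslookup layout seen here:

# Resolve host.minikube.internal, keep line 5, take the third space-separated
# field (the address), then send a single ping to it from inside the pod
HOST_IP=$(nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3)
ping -c 1 "$HOST_IP"
# In this run the resolved address was the docker bridge gateway, 192.168.49.1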

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (23.88s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-335733 -v=7 --alsologtostderr
E0316 17:08:16.360109  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/addons-821353/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-335733 -v=7 --alsologtostderr: (22.837163317s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-335733 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-335733 status -v=7 --alsologtostderr: (1.037729179s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (23.88s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-335733 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.11s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.86s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.86s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (19.98s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-arm64 -p ha-335733 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Done: out/minikube-linux-arm64 -p ha-335733 status --output json -v=7 --alsologtostderr: (1.019531653s)
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-335733 cp testdata/cp-test.txt ha-335733:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-335733 ssh -n ha-335733 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-335733 cp ha-335733:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2831573119/001/cp-test_ha-335733.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-335733 ssh -n ha-335733 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-335733 cp ha-335733:/home/docker/cp-test.txt ha-335733-m02:/home/docker/cp-test_ha-335733_ha-335733-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-335733 ssh -n ha-335733 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-335733 ssh -n ha-335733-m02 "sudo cat /home/docker/cp-test_ha-335733_ha-335733-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-335733 cp ha-335733:/home/docker/cp-test.txt ha-335733-m03:/home/docker/cp-test_ha-335733_ha-335733-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-335733 ssh -n ha-335733 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-335733 ssh -n ha-335733-m03 "sudo cat /home/docker/cp-test_ha-335733_ha-335733-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-335733 cp ha-335733:/home/docker/cp-test.txt ha-335733-m04:/home/docker/cp-test_ha-335733_ha-335733-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-335733 ssh -n ha-335733 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-335733 ssh -n ha-335733-m04 "sudo cat /home/docker/cp-test_ha-335733_ha-335733-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-335733 cp testdata/cp-test.txt ha-335733-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-335733 ssh -n ha-335733-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-335733 cp ha-335733-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2831573119/001/cp-test_ha-335733-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-335733 ssh -n ha-335733-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-335733 cp ha-335733-m02:/home/docker/cp-test.txt ha-335733:/home/docker/cp-test_ha-335733-m02_ha-335733.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-335733 ssh -n ha-335733-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-335733 ssh -n ha-335733 "sudo cat /home/docker/cp-test_ha-335733-m02_ha-335733.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-335733 cp ha-335733-m02:/home/docker/cp-test.txt ha-335733-m03:/home/docker/cp-test_ha-335733-m02_ha-335733-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-335733 ssh -n ha-335733-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-335733 ssh -n ha-335733-m03 "sudo cat /home/docker/cp-test_ha-335733-m02_ha-335733-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-335733 cp ha-335733-m02:/home/docker/cp-test.txt ha-335733-m04:/home/docker/cp-test_ha-335733-m02_ha-335733-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-335733 ssh -n ha-335733-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-335733 ssh -n ha-335733-m04 "sudo cat /home/docker/cp-test_ha-335733-m02_ha-335733-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-335733 cp testdata/cp-test.txt ha-335733-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-335733 ssh -n ha-335733-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-335733 cp ha-335733-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2831573119/001/cp-test_ha-335733-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-335733 ssh -n ha-335733-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-335733 cp ha-335733-m03:/home/docker/cp-test.txt ha-335733:/home/docker/cp-test_ha-335733-m03_ha-335733.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-335733 ssh -n ha-335733-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-335733 ssh -n ha-335733 "sudo cat /home/docker/cp-test_ha-335733-m03_ha-335733.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-335733 cp ha-335733-m03:/home/docker/cp-test.txt ha-335733-m02:/home/docker/cp-test_ha-335733-m03_ha-335733-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-335733 ssh -n ha-335733-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-335733 ssh -n ha-335733-m02 "sudo cat /home/docker/cp-test_ha-335733-m03_ha-335733-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-335733 cp ha-335733-m03:/home/docker/cp-test.txt ha-335733-m04:/home/docker/cp-test_ha-335733-m03_ha-335733-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-335733 ssh -n ha-335733-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-335733 ssh -n ha-335733-m04 "sudo cat /home/docker/cp-test_ha-335733-m03_ha-335733-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-335733 cp testdata/cp-test.txt ha-335733-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-335733 ssh -n ha-335733-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-335733 cp ha-335733-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2831573119/001/cp-test_ha-335733-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-335733 ssh -n ha-335733-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-335733 cp ha-335733-m04:/home/docker/cp-test.txt ha-335733:/home/docker/cp-test_ha-335733-m04_ha-335733.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-335733 ssh -n ha-335733-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-335733 ssh -n ha-335733 "sudo cat /home/docker/cp-test_ha-335733-m04_ha-335733.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-335733 cp ha-335733-m04:/home/docker/cp-test.txt ha-335733-m02:/home/docker/cp-test_ha-335733-m04_ha-335733-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-335733 ssh -n ha-335733-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-335733 ssh -n ha-335733-m02 "sudo cat /home/docker/cp-test_ha-335733-m04_ha-335733-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-335733 cp ha-335733-m04:/home/docker/cp-test.txt ha-335733-m03:/home/docker/cp-test_ha-335733-m04_ha-335733-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-335733 ssh -n ha-335733-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-335733 ssh -n ha-335733-m03 "sudo cat /home/docker/cp-test_ha-335733-m04_ha-335733-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.98s)
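
CopyFile pushes the same small file through every node pair (local to node, node to local, node to node) and verifies each hop with ssh + sudo cat. Two representative hops from the matrix above, assuming the four-node ha-335733 profile:

# local -> primary control plane, then verify on the node
out/minikube-linux-arm64 -p ha-335733 cp testdata/cp-test.txt ha-335733:/home/docker/cp-test.txt
out/minikube-linux-arm64 -p ha-335733 ssh -n ha-335733 "sudo cat /home/docker/cp-test.txt"
# primary -> second control plane, then verify on the target node
out/minikube-linux-arm64 -p ha-335733 cp ha-335733:/home/docker/cp-test.txt ha-335733-m02:/home/docker/cp-test_ha-335733_ha-335733-m02.txt
out/minikube-linux-arm64 -p ha-335733 ssh -n ha-335733-m02 "sudo cat /home/docker/cp-test_ha-335733_ha-335733-m02.txt"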

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (12.88s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-arm64 -p ha-335733 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-arm64 -p ha-335733 node stop m02 -v=7 --alsologtostderr: (12.119772839s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-arm64 -p ha-335733 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-335733 status -v=7 --alsologtostderr: exit status 7 (759.627749ms)

                                                
                                                
-- stdout --
	ha-335733
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-335733-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-335733-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-335733-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0316 17:08:56.911772  334541 out.go:291] Setting OutFile to fd 1 ...
	I0316 17:08:56.911920  334541 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0316 17:08:56.911929  334541 out.go:304] Setting ErrFile to fd 2...
	I0316 17:08:56.911934  334541 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0316 17:08:56.912165  334541 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18277-280225/.minikube/bin
	I0316 17:08:56.912880  334541 out.go:298] Setting JSON to false
	I0316 17:08:56.912947  334541 mustload.go:65] Loading cluster: ha-335733
	I0316 17:08:56.913028  334541 notify.go:220] Checking for updates...
	I0316 17:08:56.913371  334541 config.go:182] Loaded profile config "ha-335733": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0316 17:08:56.913392  334541 status.go:255] checking status of ha-335733 ...
	I0316 17:08:56.913908  334541 cli_runner.go:164] Run: docker container inspect ha-335733 --format={{.State.Status}}
	I0316 17:08:56.934065  334541 status.go:330] ha-335733 host status = "Running" (err=<nil>)
	I0316 17:08:56.934089  334541 host.go:66] Checking if "ha-335733" exists ...
	I0316 17:08:56.934418  334541 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-335733
	I0316 17:08:56.952290  334541 host.go:66] Checking if "ha-335733" exists ...
	I0316 17:08:56.952588  334541 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0316 17:08:56.952651  334541 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-335733
	I0316 17:08:56.972753  334541 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33165 SSHKeyPath:/home/jenkins/minikube-integration/18277-280225/.minikube/machines/ha-335733/id_rsa Username:docker}
	I0316 17:08:57.074893  334541 ssh_runner.go:195] Run: systemctl --version
	I0316 17:08:57.080019  334541 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0316 17:08:57.092786  334541 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0316 17:08:57.160326  334541 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:54 OomKillDisable:true NGoroutines:76 SystemTime:2024-03-16 17:08:57.151048302 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0316 17:08:57.160934  334541 kubeconfig.go:125] found "ha-335733" server: "https://192.168.49.254:8443"
	I0316 17:08:57.160960  334541 api_server.go:166] Checking apiserver status ...
	I0316 17:08:57.161002  334541 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 17:08:57.173186  334541 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1459/cgroup
	I0316 17:08:57.182689  334541 api_server.go:182] apiserver freezer: "7:freezer:/docker/cdf49fee391237f32b05ccb11c8739dd0e715c7634458301da0e90188a65d971/kubepods/burstable/pod9c2cbe63e45606f2da20b143f04b66cc/9590cf3147af9cf2149c3ba85e936b8eaba09b44543ded4b1d8e320e4c9a06da"
	I0316 17:08:57.182771  334541 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/cdf49fee391237f32b05ccb11c8739dd0e715c7634458301da0e90188a65d971/kubepods/burstable/pod9c2cbe63e45606f2da20b143f04b66cc/9590cf3147af9cf2149c3ba85e936b8eaba09b44543ded4b1d8e320e4c9a06da/freezer.state
	I0316 17:08:57.191873  334541 api_server.go:204] freezer state: "THAWED"
	I0316 17:08:57.191909  334541 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0316 17:08:57.200938  334541 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0316 17:08:57.200966  334541 status.go:422] ha-335733 apiserver status = Running (err=<nil>)
	I0316 17:08:57.200977  334541 status.go:257] ha-335733 status: &{Name:ha-335733 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0316 17:08:57.201000  334541 status.go:255] checking status of ha-335733-m02 ...
	I0316 17:08:57.201319  334541 cli_runner.go:164] Run: docker container inspect ha-335733-m02 --format={{.State.Status}}
	I0316 17:08:57.219640  334541 status.go:330] ha-335733-m02 host status = "Stopped" (err=<nil>)
	I0316 17:08:57.219663  334541 status.go:343] host is not running, skipping remaining checks
	I0316 17:08:57.219670  334541 status.go:257] ha-335733-m02 status: &{Name:ha-335733-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0316 17:08:57.219691  334541 status.go:255] checking status of ha-335733-m03 ...
	I0316 17:08:57.220002  334541 cli_runner.go:164] Run: docker container inspect ha-335733-m03 --format={{.State.Status}}
	I0316 17:08:57.239723  334541 status.go:330] ha-335733-m03 host status = "Running" (err=<nil>)
	I0316 17:08:57.239748  334541 host.go:66] Checking if "ha-335733-m03" exists ...
	I0316 17:08:57.240045  334541 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-335733-m03
	I0316 17:08:57.259559  334541 host.go:66] Checking if "ha-335733-m03" exists ...
	I0316 17:08:57.260009  334541 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0316 17:08:57.260065  334541 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-335733-m03
	I0316 17:08:57.279163  334541 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33175 SSHKeyPath:/home/jenkins/minikube-integration/18277-280225/.minikube/machines/ha-335733-m03/id_rsa Username:docker}
	I0316 17:08:57.376619  334541 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0316 17:08:57.389246  334541 kubeconfig.go:125] found "ha-335733" server: "https://192.168.49.254:8443"
	I0316 17:08:57.389273  334541 api_server.go:166] Checking apiserver status ...
	I0316 17:08:57.389314  334541 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 17:08:57.399795  334541 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1346/cgroup
	I0316 17:08:57.409288  334541 api_server.go:182] apiserver freezer: "7:freezer:/docker/39d4eaa8c7b88278c4db1e9749f35b2f0315886a4e81f096c5deaa4fdf6a3e58/kubepods/burstable/pod69c52284ffff55b1e3bde21934162a4f/3213cf937dc5ddc3a0cf5d1f4c25d0e0ac78224de409a58ac2a903a19c7fde26"
	I0316 17:08:57.409357  334541 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/39d4eaa8c7b88278c4db1e9749f35b2f0315886a4e81f096c5deaa4fdf6a3e58/kubepods/burstable/pod69c52284ffff55b1e3bde21934162a4f/3213cf937dc5ddc3a0cf5d1f4c25d0e0ac78224de409a58ac2a903a19c7fde26/freezer.state
	I0316 17:08:57.418907  334541 api_server.go:204] freezer state: "THAWED"
	I0316 17:08:57.418944  334541 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0316 17:08:57.427772  334541 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0316 17:08:57.427803  334541 status.go:422] ha-335733-m03 apiserver status = Running (err=<nil>)
	I0316 17:08:57.427814  334541 status.go:257] ha-335733-m03 status: &{Name:ha-335733-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0316 17:08:57.427860  334541 status.go:255] checking status of ha-335733-m04 ...
	I0316 17:08:57.428188  334541 cli_runner.go:164] Run: docker container inspect ha-335733-m04 --format={{.State.Status}}
	I0316 17:08:57.444307  334541 status.go:330] ha-335733-m04 host status = "Running" (err=<nil>)
	I0316 17:08:57.444350  334541 host.go:66] Checking if "ha-335733-m04" exists ...
	I0316 17:08:57.444698  334541 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-335733-m04
	I0316 17:08:57.465378  334541 host.go:66] Checking if "ha-335733-m04" exists ...
	I0316 17:08:57.465684  334541 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0316 17:08:57.465726  334541 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-335733-m04
	I0316 17:08:57.482509  334541 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33180 SSHKeyPath:/home/jenkins/minikube-integration/18277-280225/.minikube/machines/ha-335733-m04/id_rsa Username:docker}
	I0316 17:08:57.576972  334541 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0316 17:08:57.588644  334541 status.go:257] ha-335733-m04 status: &{Name:ha-335733-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.88s)
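
Two details in the stderr above are worth calling out. First, status probes each control plane by finding the kube-apiserver PID with pgrep, reading its freezer cgroup from /proc/<pid>/cgroup, checking that freezer.state is THAWED, and only then hitting https://192.168.49.254:8443/healthz. Second, once any node is down, status exits non-zero (exit status 7 in this run, with m02 stopped), so the test asserts on the printed per-node state rather than on the exit code. A hedged sketch of using that behaviour in a script:

# A non-zero exit from `minikube status` means the cluster is not fully Running
# (here ha-335733-m02 reports host/kubelet/apiserver Stopped)
if ! out/minikube-linux-arm64 -p ha-335733 status -v=7 --alsologtostderr; then
  echo "ha-335733 is degraded: at least one node is not Running"
fi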

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.57s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.57s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (18.57s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-arm64 -p ha-335733 node start m02 -v=7 --alsologtostderr
E0316 17:09:08.210682  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/functional-193375/client.crt: no such file or directory
E0316 17:09:08.216338  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/functional-193375/client.crt: no such file or directory
E0316 17:09:08.226570  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/functional-193375/client.crt: no such file or directory
E0316 17:09:08.247669  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/functional-193375/client.crt: no such file or directory
E0316 17:09:08.288440  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/functional-193375/client.crt: no such file or directory
E0316 17:09:08.369194  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/functional-193375/client.crt: no such file or directory
E0316 17:09:08.529304  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/functional-193375/client.crt: no such file or directory
E0316 17:09:08.850350  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/functional-193375/client.crt: no such file or directory
E0316 17:09:09.491680  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/functional-193375/client.crt: no such file or directory
E0316 17:09:10.772004  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/functional-193375/client.crt: no such file or directory
E0316 17:09:13.333034  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/functional-193375/client.crt: no such file or directory
ha_test.go:420: (dbg) Done: out/minikube-linux-arm64 -p ha-335733 node start m02 -v=7 --alsologtostderr: (17.440104417s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p ha-335733 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-linux-arm64 -p ha-335733 status -v=7 --alsologtostderr: (1.018685836s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (18.57s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.75s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.75s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (99.53s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-335733 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-335733 -v=7 --alsologtostderr
E0316 17:09:18.453250  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/functional-193375/client.crt: no such file or directory
E0316 17:09:28.694272  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/functional-193375/client.crt: no such file or directory
E0316 17:09:49.174542  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/functional-193375/client.crt: no such file or directory
ha_test.go:462: (dbg) Done: out/minikube-linux-arm64 stop -p ha-335733 -v=7 --alsologtostderr: (37.385063224s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-arm64 start -p ha-335733 --wait=true -v=7 --alsologtostderr
E0316 17:10:30.135583  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/functional-193375/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-arm64 start -p ha-335733 --wait=true -v=7 --alsologtostderr: (1m1.947982544s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-335733
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (99.53s)

TestMultiControlPlane/serial/DeleteSecondaryNode (11.34s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-arm64 -p ha-335733 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-arm64 -p ha-335733 node delete m03 -v=7 --alsologtostderr: (10.374885937s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-arm64 -p ha-335733 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.34s)
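
The final assertion above reads each node's Ready condition through a kubectl go-template (the same template is reused after every cluster mutation in this suite). Run on its own against the current context, it prints one True/False per node:

	kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'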

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.56s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.56s)

TestMultiControlPlane/serial/StopCluster (25.27s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-arm64 -p ha-335733 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-linux-arm64 -p ha-335733 stop -v=7 --alsologtostderr: (25.140289258s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-arm64 -p ha-335733 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-335733 status -v=7 --alsologtostderr: exit status 7 (129.200774ms)

-- stdout --
	ha-335733
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-335733-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-335733-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0316 17:11:34.114006  347412 out.go:291] Setting OutFile to fd 1 ...
	I0316 17:11:34.114176  347412 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0316 17:11:34.114186  347412 out.go:304] Setting ErrFile to fd 2...
	I0316 17:11:34.114191  347412 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0316 17:11:34.114440  347412 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18277-280225/.minikube/bin
	I0316 17:11:34.114619  347412 out.go:298] Setting JSON to false
	I0316 17:11:34.114651  347412 mustload.go:65] Loading cluster: ha-335733
	I0316 17:11:34.114761  347412 notify.go:220] Checking for updates...
	I0316 17:11:34.115068  347412 config.go:182] Loaded profile config "ha-335733": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0316 17:11:34.115079  347412 status.go:255] checking status of ha-335733 ...
	I0316 17:11:34.115582  347412 cli_runner.go:164] Run: docker container inspect ha-335733 --format={{.State.Status}}
	I0316 17:11:34.134699  347412 status.go:330] ha-335733 host status = "Stopped" (err=<nil>)
	I0316 17:11:34.134722  347412 status.go:343] host is not running, skipping remaining checks
	I0316 17:11:34.134730  347412 status.go:257] ha-335733 status: &{Name:ha-335733 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0316 17:11:34.134759  347412 status.go:255] checking status of ha-335733-m02 ...
	I0316 17:11:34.135058  347412 cli_runner.go:164] Run: docker container inspect ha-335733-m02 --format={{.State.Status}}
	I0316 17:11:34.154256  347412 status.go:330] ha-335733-m02 host status = "Stopped" (err=<nil>)
	I0316 17:11:34.154281  347412 status.go:343] host is not running, skipping remaining checks
	I0316 17:11:34.154289  347412 status.go:257] ha-335733-m02 status: &{Name:ha-335733-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0316 17:11:34.154332  347412 status.go:255] checking status of ha-335733-m04 ...
	I0316 17:11:34.154615  347412 cli_runner.go:164] Run: docker container inspect ha-335733-m04 --format={{.State.Status}}
	I0316 17:11:34.180350  347412 status.go:330] ha-335733-m04 host status = "Stopped" (err=<nil>)
	I0316 17:11:34.180401  347412 status.go:343] host is not running, skipping remaining checks
	I0316 17:11:34.180428  347412 status.go:257] ha-335733-m04 status: &{Name:ha-335733-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (25.27s)
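
As the stderr shows, the status command takes each node's Host state straight from Docker's view of the container; once a host reports "Stopped" the kubelet/apiserver probes are skipped (status.go:343) and the command exits with status 7. The underlying per-node check is simply:

	docker container inspect ha-335733 --format={{.State.Status}}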

TestMultiControlPlane/serial/RestartCluster (78.44s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-arm64 start -p ha-335733 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E0316 17:11:52.056633  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/functional-193375/client.crt: no such file or directory
E0316 17:12:48.671939  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/addons-821353/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-arm64 start -p ha-335733 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m17.473648353s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-arm64 -p ha-335733 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (78.44s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.59s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.59s)

TestMultiControlPlane/serial/AddSecondaryNode (45.36s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-335733 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-arm64 node add -p ha-335733 --control-plane -v=7 --alsologtostderr: (44.299399929s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-arm64 -p ha-335733 status -v=7 --alsologtostderr
ha_test.go:611: (dbg) Done: out/minikube-linux-arm64 -p ha-335733 status -v=7 --alsologtostderr: (1.058348125s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (45.36s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.81s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.81s)

TestJSONOutput/start/Command (55.68s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-222053 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd
E0316 17:14:08.211067  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/functional-193375/client.crt: no such file or directory
E0316 17:14:35.896850  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/functional-193375/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-222053 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd: (55.668873608s)
--- PASS: TestJSONOutput/start/Command (55.68s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.75s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-222053 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.75s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.67s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-222053 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.67s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.82s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-222053 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-222053 --output=json --user=testUser: (5.82267727s)
--- PASS: TestJSONOutput/stop/Command (5.82s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.22s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-472543 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-472543 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (85.950105ms)

-- stdout --
	{"specversion":"1.0","id":"0159aab7-5aa8-4345-ab80-05b37dd3f4bf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-472543] minikube v1.32.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"38905bcd-7cab-4730-b522-c1b40be5c555","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18277"}}
	{"specversion":"1.0","id":"6f429ad9-d2c6-41a4-99ab-0d2616dc136c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"934827d0-cb34-43f7-9f86-1584014cfeb5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18277-280225/kubeconfig"}}
	{"specversion":"1.0","id":"d2303164-6169-4d9a-b8f4-205f510d8947","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18277-280225/.minikube"}}
	{"specversion":"1.0","id":"d88cffaf-554b-4e3d-8352-8677cf74348a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"1a7c0fa5-f697-4a49-b60c-322e63ac99c6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"6f823fd8-199f-4851-911a-69abf4e41d0b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-472543" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-472543
--- PASS: TestErrorJSONOutput (0.22s)
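
With --output=json, minikube emits each step as a CloudEvents-style JSON object on its own line, and the failure above arrives as an io.k8s.sigs.minikube.error event carrying the exit code (56) and reason (DRV_UNSUPPORTED_OS). Assuming jq is available (it is not used by the test), the error event can be isolated from the stream like so:

	out/minikube-linux-arm64 start -p json-output-error-472543 --memory=2200 --output=json --wait=true --driver=fail \
	  | jq -c 'select(.type == "io.k8s.sigs.minikube.error") | .data'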

TestKicCustomNetwork/create_custom_network (42.3s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-417977 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-417977 --network=: (40.136636386s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-417977" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-417977
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-417977: (2.142044853s)
--- PASS: TestKicCustomNetwork/create_custom_network (42.30s)

TestKicCustomNetwork/use_default_bridge_network (37.18s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-538577 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-538577 --network=bridge: (35.156838283s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-538577" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-538577
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-538577: (2.003493894s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (37.18s)

TestKicExistingNetwork (34.28s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-514723 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-514723 --network=existing-network: (32.072832844s)
helpers_test.go:175: Cleaning up "existing-network-514723" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-514723
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-514723: (2.049585077s)
--- PASS: TestKicExistingNetwork (34.28s)

TestKicCustomSubnet (34.56s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-221779 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-221779 --subnet=192.168.60.0/24: (32.410799599s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-221779 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-221779" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-221779
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-221779: (2.128995847s)
--- PASS: TestKicCustomSubnet (34.56s)
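
The subnet assertion reads the network's IPAM configuration back from Docker, so any --subnet value can be verified by hand with the same inspect format (network name taken from this run):

	docker network inspect custom-subnet-221779 --format "{{(index .IPAM.Config 0).Subnet}}"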

TestKicStaticIP (33.66s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-435400 --static-ip=192.168.200.200
E0316 17:17:48.671664  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/addons-821353/client.crt: no such file or directory
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-435400 --static-ip=192.168.200.200: (31.420005224s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-435400 ip
helpers_test.go:175: Cleaning up "static-ip-435400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-435400
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-435400: (2.099063323s)
--- PASS: TestKicStaticIP (33.66s)

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (75.25s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-821363 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-821363 --driver=docker  --container-runtime=containerd: (33.459028294s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-823817 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-823817 --driver=docker  --container-runtime=containerd: (36.349331518s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-821363
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-823817
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-823817" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-823817
E0316 17:19:08.211545  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/functional-193375/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-823817: (1.967497103s)
helpers_test.go:175: Cleaning up "first-821363" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-821363
E0316 17:19:11.720328  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/addons-821353/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-821363: (2.252083396s)
--- PASS: TestMinikubeProfile (75.25s)

TestMountStart/serial/StartWithMountFirst (6.38s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-202497 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-202497 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.382550078s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.38s)

TestMountStart/serial/VerifyMountFirst (0.27s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-202497 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)

TestMountStart/serial/StartWithMountSecond (7.32s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-216569 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-216569 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (6.315778599s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.32s)

TestMountStart/serial/VerifyMountSecond (0.29s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-216569 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.29s)

TestMountStart/serial/DeleteFirst (1.61s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-202497 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-202497 --alsologtostderr -v=5: (1.605992852s)
--- PASS: TestMountStart/serial/DeleteFirst (1.61s)

TestMountStart/serial/VerifyMountPostDelete (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-216569 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

TestMountStart/serial/Stop (1.19s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-216569
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-216569: (1.193328387s)
--- PASS: TestMountStart/serial/Stop (1.19s)

TestMountStart/serial/RestartStopped (8.37s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-216569
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-216569: (7.372216229s)
--- PASS: TestMountStart/serial/RestartStopped (8.37s)

TestMountStart/serial/VerifyMountPostStop (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-216569 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

TestMultiNode/serial/FreshStart2Nodes (74.8s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-649793 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-649793 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m14.235566468s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-649793 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (74.80s)

TestMultiNode/serial/DeployApp2Nodes (4.57s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-649793 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-649793 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-649793 -- rollout status deployment/busybox: (2.352719335s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-649793 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-649793 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-649793 -- exec busybox-5b5d89c9d6-bkdzv -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-649793 -- exec busybox-5b5d89c9d6-w8g9m -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-649793 -- exec busybox-5b5d89c9d6-bkdzv -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-649793 -- exec busybox-5b5d89c9d6-w8g9m -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-649793 -- exec busybox-5b5d89c9d6-bkdzv -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-649793 -- exec busybox-5b5d89c9d6-w8g9m -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.57s)

TestMultiNode/serial/PingHostFrom2Pods (1.3s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-649793 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-649793 -- exec busybox-5b5d89c9d6-bkdzv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-649793 -- exec busybox-5b5d89c9d6-bkdzv -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-649793 -- exec busybox-5b5d89c9d6-w8g9m -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-649793 -- exec busybox-5b5d89c9d6-w8g9m -- sh -c "ping -c 1 192.168.58.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.30s)
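
The host-reachability check runs a short pipeline inside each busybox pod: nslookup resolves host.minikube.internal, and the awk 'NR==5' / cut -d' ' -f3 steps pick the resolved address out of busybox's nslookup output; that address (192.168.58.1 here, the host's address on the cluster network) is then pinged with ping -c 1. Reproduced against one of the pods from this run:

	kubectl --context multinode-649793 exec busybox-5b5d89c9d6-bkdzv -- \
	  sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"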

TestMultiNode/serial/AddNode (18.76s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-649793 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-649793 -v 3 --alsologtostderr: (18.087832597s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-649793 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (18.76s)

TestMultiNode/serial/MultiNodeLabels (0.09s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-649793 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

TestMultiNode/serial/ProfileList (0.33s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.33s)

TestMultiNode/serial/CopyFile (10.41s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-649793 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-649793 cp testdata/cp-test.txt multinode-649793:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-649793 ssh -n multinode-649793 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-649793 cp multinode-649793:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2911297649/001/cp-test_multinode-649793.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-649793 ssh -n multinode-649793 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-649793 cp multinode-649793:/home/docker/cp-test.txt multinode-649793-m02:/home/docker/cp-test_multinode-649793_multinode-649793-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-649793 ssh -n multinode-649793 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-649793 ssh -n multinode-649793-m02 "sudo cat /home/docker/cp-test_multinode-649793_multinode-649793-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-649793 cp multinode-649793:/home/docker/cp-test.txt multinode-649793-m03:/home/docker/cp-test_multinode-649793_multinode-649793-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-649793 ssh -n multinode-649793 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-649793 ssh -n multinode-649793-m03 "sudo cat /home/docker/cp-test_multinode-649793_multinode-649793-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-649793 cp testdata/cp-test.txt multinode-649793-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-649793 ssh -n multinode-649793-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-649793 cp multinode-649793-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2911297649/001/cp-test_multinode-649793-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-649793 ssh -n multinode-649793-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-649793 cp multinode-649793-m02:/home/docker/cp-test.txt multinode-649793:/home/docker/cp-test_multinode-649793-m02_multinode-649793.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-649793 ssh -n multinode-649793-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-649793 ssh -n multinode-649793 "sudo cat /home/docker/cp-test_multinode-649793-m02_multinode-649793.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-649793 cp multinode-649793-m02:/home/docker/cp-test.txt multinode-649793-m03:/home/docker/cp-test_multinode-649793-m02_multinode-649793-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-649793 ssh -n multinode-649793-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-649793 ssh -n multinode-649793-m03 "sudo cat /home/docker/cp-test_multinode-649793-m02_multinode-649793-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-649793 cp testdata/cp-test.txt multinode-649793-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-649793 ssh -n multinode-649793-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-649793 cp multinode-649793-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2911297649/001/cp-test_multinode-649793-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-649793 ssh -n multinode-649793-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-649793 cp multinode-649793-m03:/home/docker/cp-test.txt multinode-649793:/home/docker/cp-test_multinode-649793-m03_multinode-649793.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-649793 ssh -n multinode-649793-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-649793 ssh -n multinode-649793 "sudo cat /home/docker/cp-test_multinode-649793-m03_multinode-649793.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-649793 cp multinode-649793-m03:/home/docker/cp-test.txt multinode-649793-m02:/home/docker/cp-test_multinode-649793-m03_multinode-649793-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-649793 ssh -n multinode-649793-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-649793 ssh -n multinode-649793-m02 "sudo cat /home/docker/cp-test_multinode-649793-m03_multinode-649793-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.41s)
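
Every transfer above follows the same pattern: the cp subcommand accepts either a local path or a <node>:<path> form on both sides, and each copy is verified by reading the file back with ssh -n on the target node. A node-to-node example taken from this run:

	out/minikube-linux-arm64 -p multinode-649793 cp multinode-649793:/home/docker/cp-test.txt multinode-649793-m02:/home/docker/cp-test_multinode-649793_multinode-649793-m02.txt
	out/minikube-linux-arm64 -p multinode-649793 ssh -n multinode-649793-m02 "sudo cat /home/docker/cp-test_multinode-649793_multinode-649793-m02.txt"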

TestMultiNode/serial/StopNode (2.3s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-649793 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-649793 node stop m03: (1.235601677s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-649793 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-649793 status: exit status 7 (526.260775ms)

-- stdout --
	multinode-649793
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-649793-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-649793-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-649793 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-649793 status --alsologtostderr: exit status 7 (533.854256ms)

-- stdout --
	multinode-649793
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-649793-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-649793-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0316 17:21:31.941624  398961 out.go:291] Setting OutFile to fd 1 ...
	I0316 17:21:31.941856  398961 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0316 17:21:31.941883  398961 out.go:304] Setting ErrFile to fd 2...
	I0316 17:21:31.941901  398961 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0316 17:21:31.942316  398961 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18277-280225/.minikube/bin
	I0316 17:21:31.942934  398961 out.go:298] Setting JSON to false
	I0316 17:21:31.943477  398961 mustload.go:65] Loading cluster: multinode-649793
	I0316 17:21:31.944269  398961 config.go:182] Loaded profile config "multinode-649793": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0316 17:21:31.944589  398961 notify.go:220] Checking for updates...
	I0316 17:21:31.944596  398961 status.go:255] checking status of multinode-649793 ...
	I0316 17:21:31.945221  398961 cli_runner.go:164] Run: docker container inspect multinode-649793 --format={{.State.Status}}
	I0316 17:21:31.963744  398961 status.go:330] multinode-649793 host status = "Running" (err=<nil>)
	I0316 17:21:31.963767  398961 host.go:66] Checking if "multinode-649793" exists ...
	I0316 17:21:31.964087  398961 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-649793
	I0316 17:21:31.980848  398961 host.go:66] Checking if "multinode-649793" exists ...
	I0316 17:21:31.981171  398961 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0316 17:21:31.981240  398961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-649793
	I0316 17:21:32.007819  398961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33285 SSHKeyPath:/home/jenkins/minikube-integration/18277-280225/.minikube/machines/multinode-649793/id_rsa Username:docker}
	I0316 17:21:32.105662  398961 ssh_runner.go:195] Run: systemctl --version
	I0316 17:21:32.110233  398961 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0316 17:21:32.122579  398961 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0316 17:21:32.174116  398961 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:66 SystemTime:2024-03-16 17:21:32.163690361 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0316 17:21:32.174729  398961 kubeconfig.go:125] found "multinode-649793" server: "https://192.168.58.2:8443"
	I0316 17:21:32.174753  398961 api_server.go:166] Checking apiserver status ...
	I0316 17:21:32.174804  398961 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 17:21:32.186837  398961 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1388/cgroup
	I0316 17:21:32.197428  398961 api_server.go:182] apiserver freezer: "7:freezer:/docker/c28077b8bfaa9eda87ecbc2607f70b6a7621c7be8e5cc1907ac15a83f82507c1/kubepods/burstable/pod68c289cf0e3c2807fe788c71e7f37dc8/b434ef9dd7595d24a7cfcd681567f9eec28771ac7e5f69ed51e4ff26e3b23b0e"
	I0316 17:21:32.197505  398961 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/c28077b8bfaa9eda87ecbc2607f70b6a7621c7be8e5cc1907ac15a83f82507c1/kubepods/burstable/pod68c289cf0e3c2807fe788c71e7f37dc8/b434ef9dd7595d24a7cfcd681567f9eec28771ac7e5f69ed51e4ff26e3b23b0e/freezer.state
	I0316 17:21:32.206929  398961 api_server.go:204] freezer state: "THAWED"
	I0316 17:21:32.206953  398961 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0316 17:21:32.215368  398961 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0316 17:21:32.215397  398961 status.go:422] multinode-649793 apiserver status = Running (err=<nil>)
	I0316 17:21:32.215409  398961 status.go:257] multinode-649793 status: &{Name:multinode-649793 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0316 17:21:32.215451  398961 status.go:255] checking status of multinode-649793-m02 ...
	I0316 17:21:32.215815  398961 cli_runner.go:164] Run: docker container inspect multinode-649793-m02 --format={{.State.Status}}
	I0316 17:21:32.231276  398961 status.go:330] multinode-649793-m02 host status = "Running" (err=<nil>)
	I0316 17:21:32.231299  398961 host.go:66] Checking if "multinode-649793-m02" exists ...
	I0316 17:21:32.231591  398961 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-649793-m02
	I0316 17:21:32.247131  398961 host.go:66] Checking if "multinode-649793-m02" exists ...
	I0316 17:21:32.247513  398961 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0316 17:21:32.247556  398961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-649793-m02
	I0316 17:21:32.268622  398961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33290 SSHKeyPath:/home/jenkins/minikube-integration/18277-280225/.minikube/machines/multinode-649793-m02/id_rsa Username:docker}
	I0316 17:21:32.364467  398961 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0316 17:21:32.378747  398961 status.go:257] multinode-649793-m02 status: &{Name:multinode-649793-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0316 17:21:32.378788  398961 status.go:255] checking status of multinode-649793-m03 ...
	I0316 17:21:32.379167  398961 cli_runner.go:164] Run: docker container inspect multinode-649793-m03 --format={{.State.Status}}
	I0316 17:21:32.398437  398961 status.go:330] multinode-649793-m03 host status = "Stopped" (err=<nil>)
	I0316 17:21:32.398490  398961 status.go:343] host is not running, skipping remaining checks
	I0316 17:21:32.398500  398961 status.go:257] multinode-649793-m03 status: &{Name:multinode-649793-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.30s)
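The status check captured above shows how minikube decides that an apiserver is Running: it locates the kube-apiserver process, confirms its freezer cgroup is THAWED, and then queries the /healthz endpoint. A minimal manual replay of that sequence is sketched below; it assumes the cgroup v1 layout seen in this log, that the first three commands run inside the node (e.g. via minikube ssh -p multinode-649793), and that the host kubeconfig has credentials for the cluster.

    # locate the newest kube-apiserver process (same pattern as in the log above)
    PID=$(sudo pgrep -xnf 'kube-apiserver.*minikube.*')
    # resolve its freezer cgroup and confirm it is not frozen
    CG=$(sudo grep -E '^[0-9]+:freezer:' /proc/${PID}/cgroup | cut -d: -f3)
    sudo cat /sys/fs/cgroup/freezer${CG}/freezer.state     # expect "THAWED"
    # from the host, probe the apiserver health endpoint using cluster credentials
    kubectl --context multinode-649793 get --raw=/healthz  # expect "ok"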

                                                
                                    
TestMultiNode/serial/StartAfterStop (9.38s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-649793 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-649793 node start m03 -v=7 --alsologtostderr: (8.626264844s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-649793 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.38s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (85.66s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-649793
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-649793
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-649793: (25.004780046s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-649793 --wait=true -v=8 --alsologtostderr
E0316 17:22:48.671830  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/addons-821353/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-649793 --wait=true -v=8 --alsologtostderr: (1m0.521452546s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-649793
--- PASS: TestMultiNode/serial/RestartKeepsNodes (85.66s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.4s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-649793 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-649793 node delete m03: (4.718817329s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-649793 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.40s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (23.98s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-649793 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-649793 stop: (23.79070728s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-649793 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-649793 status: exit status 7 (97.059957ms)

                                                
                                                
-- stdout --
	multinode-649793
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-649793-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-649793 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-649793 status --alsologtostderr: exit status 7 (91.518875ms)

                                                
                                                
-- stdout --
	multinode-649793
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-649793-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0316 17:23:36.790237  406514 out.go:291] Setting OutFile to fd 1 ...
	I0316 17:23:36.790378  406514 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0316 17:23:36.790389  406514 out.go:304] Setting ErrFile to fd 2...
	I0316 17:23:36.790395  406514 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0316 17:23:36.790639  406514 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18277-280225/.minikube/bin
	I0316 17:23:36.790815  406514 out.go:298] Setting JSON to false
	I0316 17:23:36.790856  406514 mustload.go:65] Loading cluster: multinode-649793
	I0316 17:23:36.790969  406514 notify.go:220] Checking for updates...
	I0316 17:23:36.791279  406514 config.go:182] Loaded profile config "multinode-649793": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0316 17:23:36.791297  406514 status.go:255] checking status of multinode-649793 ...
	I0316 17:23:36.791860  406514 cli_runner.go:164] Run: docker container inspect multinode-649793 --format={{.State.Status}}
	I0316 17:23:36.809597  406514 status.go:330] multinode-649793 host status = "Stopped" (err=<nil>)
	I0316 17:23:36.809621  406514 status.go:343] host is not running, skipping remaining checks
	I0316 17:23:36.809629  406514 status.go:257] multinode-649793 status: &{Name:multinode-649793 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0316 17:23:36.809662  406514 status.go:255] checking status of multinode-649793-m02 ...
	I0316 17:23:36.809967  406514 cli_runner.go:164] Run: docker container inspect multinode-649793-m02 --format={{.State.Status}}
	I0316 17:23:36.825875  406514 status.go:330] multinode-649793-m02 host status = "Stopped" (err=<nil>)
	I0316 17:23:36.825898  406514 status.go:343] host is not running, skipping remaining checks
	I0316 17:23:36.825906  406514 status.go:257] multinode-649793-m02 status: &{Name:multinode-649793-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.98s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (55.98s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-649793 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0316 17:24:08.210970  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/functional-193375/client.crt: no such file or directory
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-649793 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (55.24157462s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-649793 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (55.98s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (33.96s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-649793
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-649793-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-649793-m02 --driver=docker  --container-runtime=containerd: exit status 14 (78.968215ms)

                                                
                                                
-- stdout --
	* [multinode-649793-m02] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18277
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18277-280225/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18277-280225/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-649793-m02' is duplicated with machine name 'multinode-649793-m02' in profile 'multinode-649793'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-649793-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-649793-m03 --driver=docker  --container-runtime=containerd: (31.524344327s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-649793
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-649793: exit status 80 (316.426673ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-649793 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-649793-m03 already exists in multinode-649793-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-649793-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-649793-m03: (1.980775501s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (33.96s)

                                                
                                    
TestPreload (118.66s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-966694 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4
E0316 17:25:31.257129  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/functional-193375/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-966694 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m21.416266455s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-966694 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-966694 image pull gcr.io/k8s-minikube/busybox: (1.318320636s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-966694
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-966694: (12.078963698s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-966694 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-966694 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (21.121730903s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-966694 image list
helpers_test.go:175: Cleaning up "test-preload-966694" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-966694
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-966694: (2.399703543s)
--- PASS: TestPreload (118.66s)

                                                
                                    
TestScheduledStopUnix (111.06s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-535697 --memory=2048 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-535697 --memory=2048 --driver=docker  --container-runtime=containerd: (34.061820259s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-535697 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-535697 -n scheduled-stop-535697
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-535697 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-535697 --cancel-scheduled
E0316 17:27:48.672304  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/addons-821353/client.crt: no such file or directory
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-535697 -n scheduled-stop-535697
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-535697
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-535697 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-535697
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-535697: exit status 7 (106.182224ms)

                                                
                                                
-- stdout --
	scheduled-stop-535697
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-535697 -n scheduled-stop-535697
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-535697 -n scheduled-stop-535697: exit status 7 (84.018686ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-535697" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-535697
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-535697: (5.332297249s)
--- PASS: TestScheduledStopUnix (111.06s)
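For reference, the scheduled-stop flow exercised above boils down to the following command sequence; "demo" is a placeholder profile name and the timings mirror the ones used in the test.

    minikube stop -p demo --schedule 5m                # arm a stop five minutes out
    minikube status -p demo --format={{.TimeToStop}}   # shows the pending schedule
    minikube stop -p demo --cancel-scheduled           # cancel the pending stop
    minikube stop -p demo --schedule 15s               # re-arm; shortly afterwards...
    minikube status -p demo --format={{.Host}}         # ...reports "Stopped" (exit status 7)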

                                                
                                    
TestInsufficientStorage (10.04s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-627036 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-627036 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (7.578721125s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"c877c900-4ed6-4dc0-869c-a85b90a1a037","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-627036] minikube v1.32.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"542fbf93-1844-4792-b9ef-9497713ed7c3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18277"}}
	{"specversion":"1.0","id":"90d26fd0-64e7-48d4-a3d2-f215965d94cd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"09e3000f-01e1-4c7b-b01f-b12e0b0fb152","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18277-280225/kubeconfig"}}
	{"specversion":"1.0","id":"8ee34084-06a4-4564-a98e-d44af463fced","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18277-280225/.minikube"}}
	{"specversion":"1.0","id":"e3b7b895-8452-4ef3-80a6-4a9ab6a9782f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"83213990-c849-42f5-be00-845ebf7843d8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"d1ba5add-9036-4b4d-9246-85ac9693f385","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"935c514d-b664-4be1-b24b-4990c89eef9f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"5c89fdb3-c508-4602-ae1d-4752af0612d0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"0624b7ef-3c7f-4102-814b-e53d58245e42","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"c7e19974-54e3-471f-9e59-a111668d5277","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-627036\" primary control-plane node in \"insufficient-storage-627036\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"6e9e54d6-206f-4dab-88e1-62db3fcc1e7d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.42-1710284843-18375 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"f72f4ed3-765c-4f9f-b81d-00a5961bfa46","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"0b9f7f2b-05fc-4d69-b701-eda919219e2a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-627036 --output=json --layout=cluster
E0316 17:29:08.211311  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/functional-193375/client.crt: no such file or directory
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-627036 --output=json --layout=cluster: exit status 7 (289.5043ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-627036","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-627036","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0316 17:29:08.308507  424100 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-627036" does not appear in /home/jenkins/minikube-integration/18277-280225/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-627036 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-627036 --output=json --layout=cluster: exit status 7 (286.495024ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-627036","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-627036","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0316 17:29:08.597784  424152 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-627036" does not appear in /home/jenkins/minikube-integration/18277-280225/kubeconfig
	E0316 17:29:08.607381  424152 status.go:560] unable to read event log: stat: stat /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/insufficient-storage-627036/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-627036" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-627036
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-627036: (1.886200779s)
--- PASS: TestInsufficientStorage (10.04s)
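The two MINIKUBE_TEST_* values visible in the JSON events above appear to be test-only overrides that make the free-space check think /var is nearly full, which is what drives the exit status 26 (RSRC_DOCKER_STORAGE). A sketch of the same invocation, assuming those variables behave as they do in this run; "storage-demo" is a placeholder profile name.

    export MINIKUBE_TEST_STORAGE_CAPACITY=100
    export MINIKUBE_TEST_AVAILABLE_STORAGE=19
    minikube start -p storage-demo --memory=2048 --output=json --wait=true \
        --driver=docker --container-runtime=containerd
    echo $?   # 26 expected; per the error text above, passing --force would skip the check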

                                                
                                    
TestRunningBinaryUpgrade (79.95s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.3372235082 start -p running-upgrade-848646 --memory=2200 --vm-driver=docker  --container-runtime=containerd
E0316 17:35:51.720857  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/addons-821353/client.crt: no such file or directory
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.3372235082 start -p running-upgrade-848646 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (34.18139048s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-848646 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-848646 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (41.50185995s)
helpers_test.go:175: Cleaning up "running-upgrade-848646" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-848646
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-848646: (3.071087053s)
--- PASS: TestRunningBinaryUpgrade (79.95s)

                                                
                                    
TestKubernetesUpgrade (393.7s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-854208 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-854208 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m4.873789204s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-854208
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-854208: (1.343152258s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-854208 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-854208 status --format={{.Host}}: exit status 7 (133.027681ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-854208 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-854208 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (5m10.841720969s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-854208 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-854208 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-854208 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd: exit status 106 (85.77812ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-854208] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18277
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18277-280225/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18277-280225/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-854208
	    minikube start -p kubernetes-upgrade-854208 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-8542082 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-854208 --kubernetes-version=v1.29.0-rc.2
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-854208 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-854208 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (13.899874315s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-854208" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-854208
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-854208: (2.423460689s)
--- PASS: TestKubernetesUpgrade (393.70s)
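Condensed, the upgrade path exercised above is: start on an old Kubernetes version, stop, restart on a newer one, and confirm that a downgrade attempt is refused. A sketch with a placeholder profile name:

    minikube start -p upgrade-demo --memory=2200 --kubernetes-version=v1.20.0 \
        --driver=docker --container-runtime=containerd
    minikube stop -p upgrade-demo
    minikube start -p upgrade-demo --memory=2200 --kubernetes-version=v1.29.0-rc.2 \
        --driver=docker --container-runtime=containerd
    # downgrades are rejected: this exits 106 (K8S_DOWNGRADE_UNSUPPORTED)
    minikube start -p upgrade-demo --memory=2200 --kubernetes-version=v1.20.0 \
        --driver=docker --container-runtime=containerd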

                                                
                                    
TestMissingContainerUpgrade (169.93s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.3400493931 start -p missing-upgrade-925316 --memory=2200 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.3400493931 start -p missing-upgrade-925316 --memory=2200 --driver=docker  --container-runtime=containerd: (1m33.998384779s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-925316
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-925316: (10.620811856s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-925316
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-925316 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0316 17:32:48.677845  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/addons-821353/client.crt: no such file or directory
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-925316 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m1.706126317s)
helpers_test.go:175: Cleaning up "missing-upgrade-925316" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-925316
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-925316: (2.259238175s)
--- PASS: TestMissingContainerUpgrade (169.93s)

                                                
                                    
TestPause/serial/Start (64.27s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-524765 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-524765 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (1m4.266133845s)
--- PASS: TestPause/serial/Start (64.27s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.12s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-015150 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-015150 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd: exit status 14 (120.661007ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-015150] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18277
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18277-280225/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18277-280225/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.12s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (45.6s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-015150 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-015150 --driver=docker  --container-runtime=containerd: (45.24744614s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-015150 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (45.60s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (16.11s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-015150 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-015150 --no-kubernetes --driver=docker  --container-runtime=containerd: (13.833727634s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-015150 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-015150 status -o json: exit status 2 (341.440472ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-015150","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-015150
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-015150: (1.938696888s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (16.11s)

                                                
                                    
TestNoKubernetes/serial/Start (8.75s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-015150 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-015150 --no-kubernetes --driver=docker  --container-runtime=containerd: (8.744925468s)
--- PASS: TestNoKubernetes/serial/Start (8.75s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (6.79s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-524765 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-524765 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (6.78088221s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (6.79s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-015150 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-015150 "sudo systemctl is-active --quiet service kubelet": exit status 1 (281.303424ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)
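The NoKubernetes tests above demonstrate the --no-kubernetes flag: combining it with --kubernetes-version is rejected, and without that flag the node comes up with no kubelet at all. A short sketch with a placeholder profile name:

    # rejected with exit 14 (MK_USAGE), as in StartNoK8sWithVersion above
    minikube start -p nok8s-demo --no-kubernetes --kubernetes-version=1.20 --driver=docker --container-runtime=containerd
    # without the version flag the node starts, but kubelet is never launched
    minikube start -p nok8s-demo --no-kubernetes --driver=docker --container-runtime=containerd
    minikube ssh -p nok8s-demo "sudo systemctl is-active --quiet service kubelet"   # exits non-zero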

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.21s)

                                                
                                    
TestPause/serial/Pause (0.92s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-524765 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.92s)

                                                
                                    
TestPause/serial/VerifyStatus (0.44s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-524765 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-524765 --output=json --layout=cluster: exit status 2 (436.354964ms)

                                                
                                                
-- stdout --
	{"Name":"pause-524765","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-524765","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.44s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.38s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-015150
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-015150: (1.37698138s)
--- PASS: TestNoKubernetes/serial/Stop (1.38s)

                                                
                                    
TestPause/serial/Unpause (0.74s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-524765 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.74s)

                                                
                                    
TestPause/serial/PauseAgain (1.14s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-524765 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-524765 --alsologtostderr -v=5: (1.144479934s)
--- PASS: TestPause/serial/PauseAgain (1.14s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (7.18s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-015150 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-015150 --driver=docker  --container-runtime=containerd: (7.178287018s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.18s)

                                                
                                    
TestPause/serial/DeletePaused (2.81s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-524765 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-524765 --alsologtostderr -v=5: (2.812324704s)
--- PASS: TestPause/serial/DeletePaused (2.81s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.35s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-524765
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-524765: exit status 1 (14.947496ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-524765: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.35s)
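Taken together, the TestPause/serial tests above walk through the whole pause lifecycle: pause, verify the paused status, unpause, pause again, delete, then confirm the Docker resources are gone. A condensed replay with a placeholder profile name:

    minikube pause -p pause-demo
    minikube status -p pause-demo --output=json --layout=cluster   # exit 2, StatusName "Paused"
    minikube unpause -p pause-demo
    minikube pause -p pause-demo
    minikube delete -p pause-demo
    docker volume inspect pause-demo   # exit 1 once the profile volume is gone
    docker network ls                  # the profile network should no longer be listed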

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.39s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-015150 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-015150 "sudo systemctl is-active --quiet service kubelet": exit status 1 (392.935754ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.39s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (1.17s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.17s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (117.02s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2135173619 start -p stopped-upgrade-373988 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2135173619 start -p stopped-upgrade-373988 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (46.934892616s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2135173619 -p stopped-upgrade-373988 stop
E0316 17:34:08.210730  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/functional-193375/client.crt: no such file or directory
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2135173619 -p stopped-upgrade-373988 stop: (19.923536005s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-373988 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-373988 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (50.165760822s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (117.02s)
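The stopped-binary upgrade above boils down to: create a cluster with an older minikube release, stop it with that same binary, then start it again with the binary under test. Sketched with a placeholder path for the old release and a placeholder profile name (the old binary still uses the --vm-driver spelling, as in the log):

    /path/to/minikube-v1.26.0 start -p upgrade-demo --memory=2200 --vm-driver=docker --container-runtime=containerd
    /path/to/minikube-v1.26.0 -p upgrade-demo stop
    out/minikube-linux-arm64 start -p upgrade-demo --memory=2200 --driver=docker --container-runtime=containerd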

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.31s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-373988
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-373988: (1.309966123s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.31s)

                                                
                                    
TestNetworkPlugins/group/false (5.04s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-004165 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-004165 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (298.595291ms)

                                                
                                                
-- stdout --
	* [false-004165] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18277
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18277-280225/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18277-280225/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0316 17:37:12.760198  464632 out.go:291] Setting OutFile to fd 1 ...
	I0316 17:37:12.765400  464632 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0316 17:37:12.765446  464632 out.go:304] Setting ErrFile to fd 2...
	I0316 17:37:12.765466  464632 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0316 17:37:12.765763  464632 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18277-280225/.minikube/bin
	I0316 17:37:12.766253  464632 out.go:298] Setting JSON to false
	I0316 17:37:12.767344  464632 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":11979,"bootTime":1710598654,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0316 17:37:12.767452  464632 start.go:139] virtualization:  
	I0316 17:37:12.770377  464632 out.go:177] * [false-004165] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0316 17:37:12.773343  464632 out.go:177]   - MINIKUBE_LOCATION=18277
	I0316 17:37:12.775205  464632 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0316 17:37:12.773430  464632 notify.go:220] Checking for updates...
	I0316 17:37:12.777114  464632 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18277-280225/kubeconfig
	I0316 17:37:12.780361  464632 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18277-280225/.minikube
	I0316 17:37:12.782344  464632 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0316 17:37:12.784439  464632 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0316 17:37:12.786876  464632 config.go:182] Loaded profile config "force-systemd-flag-641679": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0316 17:37:12.786999  464632 driver.go:392] Setting default libvirt URI to qemu:///system
	I0316 17:37:12.825687  464632 docker.go:122] docker version: linux-25.0.4:Docker Engine - Community
	I0316 17:37:12.825808  464632 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0316 17:37:12.966761  464632 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:56 SystemTime:2024-03-16 17:37:12.95222682 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0316 17:37:12.966891  464632 docker.go:295] overlay module found
	I0316 17:37:12.970792  464632 out.go:177] * Using the docker driver based on user configuration
	I0316 17:37:12.973002  464632 start.go:297] selected driver: docker
	I0316 17:37:12.973033  464632 start.go:901] validating driver "docker" against <nil>
	I0316 17:37:12.973052  464632 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0316 17:37:12.976228  464632 out.go:177] 
	W0316 17:37:12.978346  464632 out.go:239] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0316 17:37:12.980597  464632 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-004165 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-004165

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-004165

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-004165

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-004165

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-004165

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-004165

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-004165

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-004165

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-004165

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-004165

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-004165" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-004165"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-004165" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-004165"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-004165" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-004165"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-004165

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-004165" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-004165"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-004165" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-004165"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-004165" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-004165" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-004165" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-004165" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-004165" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-004165" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-004165" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-004165" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-004165" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-004165"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-004165" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-004165"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-004165" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-004165"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-004165" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-004165"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-004165" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-004165"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-004165" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-004165" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-004165" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-004165" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-004165"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-004165" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-004165"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-004165" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-004165"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-004165" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-004165"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-004165" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-004165"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-004165

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-004165" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-004165"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-004165" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-004165"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-004165" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-004165"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-004165" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-004165"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-004165" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-004165"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-004165" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-004165"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-004165" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-004165"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-004165" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-004165"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-004165" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-004165"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-004165" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-004165"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-004165" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-004165"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-004165" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-004165"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-004165" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-004165"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-004165" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-004165"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-004165" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-004165"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-004165" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-004165"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-004165" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-004165"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-004165" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-004165"

                                                
                                                
----------------------- debugLogs end: false-004165 [took: 4.49525918s] --------------------------------
helpers_test.go:175: Cleaning up "false-004165" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-004165
--- PASS: TestNetworkPlugins/group/false (5.04s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (168.49s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-746380 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
E0316 17:39:08.211393  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/functional-193375/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-746380 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: (2m48.494190943s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (168.49s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (79.75s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-308593 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-308593 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2: (1m19.748069574s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (79.75s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (9.83s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-746380 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [c2d9cc7c-df1c-41ea-84b7-70cbcf81d18f] Pending
helpers_test.go:344: "busybox" [c2d9cc7c-df1c-41ea-84b7-70cbcf81d18f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [c2d9cc7c-df1c-41ea-84b7-70cbcf81d18f] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.004274001s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-746380 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.83s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.72s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-746380 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-746380 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.489753218s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-746380 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.72s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (12.41s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-746380 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-746380 --alsologtostderr -v=3: (12.412272939s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.41s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.39s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-746380 -n old-k8s-version-746380
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-746380 -n old-k8s-version-746380: exit status 7 (149.336945ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-746380 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.39s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (8.51s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-308593 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [7600f0f7-033c-400b-883f-29df7144ee9b] Pending
helpers_test.go:344: "busybox" [7600f0f7-033c-400b-883f-29df7144ee9b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [7600f0f7-033c-400b-883f-29df7144ee9b] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.004550304s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-308593 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.51s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.41s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-308593 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-308593 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.266920493s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-308593 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.41s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (12.86s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-308593 --alsologtostderr -v=3
E0316 17:42:48.671854  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/addons-821353/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-308593 --alsologtostderr -v=3: (12.857984356s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.86s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-308593 -n no-preload-308593
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-308593 -n no-preload-308593: exit status 7 (86.033651ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-308593 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (267.35s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-308593 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2
E0316 17:44:08.211555  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/functional-193375/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-308593 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2: (4m26.935967574s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-308593 -n no-preload-308593
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (267.35s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-67xrf" [2e02f867-8fbe-4162-b6bb-3da5de20118d] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004460156s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.09s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-67xrf" [2e02f867-8fbe-4162-b6bb-3da5de20118d] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004519308s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-308593 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.09s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.29s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-308593 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.29s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (3.39s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-308593 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-308593 -n no-preload-308593
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-308593 -n no-preload-308593: exit status 2 (343.884432ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-308593 -n no-preload-308593
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-308593 -n no-preload-308593: exit status 2 (337.556898ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-308593 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-308593 -n no-preload-308593
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-308593 -n no-preload-308593
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.39s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (65.79s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-126148 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4
E0316 17:47:48.671682  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/addons-821353/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-126148 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4: (1m5.791027948s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (65.79s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-pcqp6" [b6175db8-f467-4622-b51f-6c3a0b9dff8e] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004889973s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.15s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-pcqp6" [b6175db8-f467-4622-b51f-6c3a0b9dff8e] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003939177s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-746380 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.15s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-746380 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (3.33s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-746380 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-746380 -n old-k8s-version-746380
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-746380 -n old-k8s-version-746380: exit status 2 (385.347287ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-746380 -n old-k8s-version-746380
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-746380 -n old-k8s-version-746380: exit status 2 (357.033984ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-746380 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-746380 -n old-k8s-version-746380
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-746380 -n old-k8s-version-746380
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.33s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (62.13s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-685848 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-685848 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4: (1m2.131289233s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (62.13s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (9.59s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-126148 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [500b2562-2947-4d79-a095-5b6ea826cbb2] Pending
helpers_test.go:344: "busybox" [500b2562-2947-4d79-a095-5b6ea826cbb2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [500b2562-2947-4d79-a095-5b6ea826cbb2] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.003362778s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-126148 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.59s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.78s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-126148 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-126148 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.598270113s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-126148 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.78s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (12.29s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-126148 --alsologtostderr -v=3
E0316 17:49:08.211634  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/functional-193375/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-126148 --alsologtostderr -v=3: (12.287880396s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.29s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-126148 -n embed-certs-126148
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-126148 -n embed-certs-126148: exit status 7 (80.480853ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-126148 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (267.74s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-126148 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-126148 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4: (4m27.363860989s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-126148 -n embed-certs-126148
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (267.74s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.43s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-685848 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [f9ef8860-8464-492a-8402-87b6858d1acc] Pending
helpers_test.go:344: "busybox" [f9ef8860-8464-492a-8402-87b6858d1acc] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [f9ef8860-8464-492a-8402-87b6858d1acc] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.003294923s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-685848 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.43s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.58s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-685848 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-685848 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.41526659s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-685848 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.58s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.24s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-685848 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-685848 --alsologtostderr -v=3: (12.239997815s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.24s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-685848 -n default-k8s-diff-port-685848
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-685848 -n default-k8s-diff-port-685848: exit status 7 (97.454087ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-685848 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (267.41s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-685848 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4
E0316 17:51:35.880279  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/old-k8s-version-746380/client.crt: no such file or directory
E0316 17:51:35.886070  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/old-k8s-version-746380/client.crt: no such file or directory
E0316 17:51:35.896413  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/old-k8s-version-746380/client.crt: no such file or directory
E0316 17:51:35.916741  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/old-k8s-version-746380/client.crt: no such file or directory
E0316 17:51:35.957076  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/old-k8s-version-746380/client.crt: no such file or directory
E0316 17:51:36.037359  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/old-k8s-version-746380/client.crt: no such file or directory
E0316 17:51:36.198239  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/old-k8s-version-746380/client.crt: no such file or directory
E0316 17:51:36.518742  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/old-k8s-version-746380/client.crt: no such file or directory
E0316 17:51:37.158932  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/old-k8s-version-746380/client.crt: no such file or directory
E0316 17:51:38.439155  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/old-k8s-version-746380/client.crt: no such file or directory
E0316 17:51:40.999383  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/old-k8s-version-746380/client.crt: no such file or directory
E0316 17:51:46.119542  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/old-k8s-version-746380/client.crt: no such file or directory
E0316 17:51:56.359780  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/old-k8s-version-746380/client.crt: no such file or directory
E0316 17:52:16.839995  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/old-k8s-version-746380/client.crt: no such file or directory
E0316 17:52:31.721879  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/addons-821353/client.crt: no such file or directory
E0316 17:52:36.952840  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/no-preload-308593/client.crt: no such file or directory
E0316 17:52:36.958189  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/no-preload-308593/client.crt: no such file or directory
E0316 17:52:36.968552  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/no-preload-308593/client.crt: no such file or directory
E0316 17:52:36.988853  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/no-preload-308593/client.crt: no such file or directory
E0316 17:52:37.029379  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/no-preload-308593/client.crt: no such file or directory
E0316 17:52:37.109739  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/no-preload-308593/client.crt: no such file or directory
E0316 17:52:37.270678  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/no-preload-308593/client.crt: no such file or directory
E0316 17:52:37.591166  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/no-preload-308593/client.crt: no such file or directory
E0316 17:52:38.232291  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/no-preload-308593/client.crt: no such file or directory
E0316 17:52:39.512810  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/no-preload-308593/client.crt: no such file or directory
E0316 17:52:42.073094  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/no-preload-308593/client.crt: no such file or directory
E0316 17:52:47.193447  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/no-preload-308593/client.crt: no such file or directory
E0316 17:52:48.671712  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/addons-821353/client.crt: no such file or directory
E0316 17:52:57.434547  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/no-preload-308593/client.crt: no such file or directory
E0316 17:52:57.801070  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/old-k8s-version-746380/client.crt: no such file or directory
E0316 17:53:17.914896  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/no-preload-308593/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-685848 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4: (4m26.951951832s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-685848 -n default-k8s-diff-port-685848
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (267.41s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-7sqv2" [6b8a1bdf-1c1e-4827-ae3a-19b5dcb66e51] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004010435s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.12s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-7sqv2" [6b8a1bdf-1c1e-4827-ae3a-19b5dcb66e51] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003863688s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-126148 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.12s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-126148 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.27s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.11s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-126148 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-126148 -n embed-certs-126148
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-126148 -n embed-certs-126148: exit status 2 (323.204743ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-126148 -n embed-certs-126148
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-126148 -n embed-certs-126148: exit status 2 (327.24268ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-126148 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-126148 -n embed-certs-126148
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-126148 -n embed-certs-126148
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.11s)
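For reference, the pause/unpause cycle above can be replayed with the same commands this run used (profile name from this run); the exit status 2 on the intermediate status queries is expected while the cluster is paused, as the "(may be ok)" notes indicate:

  out/minikube-linux-arm64 pause -p embed-certs-126148 --alsologtostderr -v=1
  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-126148 -n embed-certs-126148   # "Paused" in this run
  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-126148 -n embed-certs-126148     # "Stopped" in this run
  out/minikube-linux-arm64 unpause -p embed-certs-126148 --alsologtostderr -v=1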

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (49s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-609529 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2
E0316 17:54:08.211218  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/functional-193375/client.crt: no such file or directory
E0316 17:54:19.721234  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/old-k8s-version-746380/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-609529 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2: (49.002652975s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (49.00s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-jt8t2" [49ab1ae6-10ac-437c-a2d5-000481c02bc5] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004019395s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.11s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-jt8t2" [49ab1ae6-10ac-437c-a2d5-000481c02bc5] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004381048s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-685848 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.11s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-685848 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.69s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-685848 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-685848 -n default-k8s-diff-port-685848
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-685848 -n default-k8s-diff-port-685848: exit status 2 (352.384443ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-685848 -n default-k8s-diff-port-685848
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-685848 -n default-k8s-diff-port-685848: exit status 2 (384.863162ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-685848 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-685848 -n default-k8s-diff-port-685848
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-685848 -n default-k8s-diff-port-685848
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.69s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (67.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-004165 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-004165 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (1m7.383154219s)
--- PASS: TestNetworkPlugins/group/auto/Start (67.38s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.75s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-609529 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-609529 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.753980277s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.75s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.69s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-609529 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-609529 --alsologtostderr -v=3: (1.687285959s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.69s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-609529 -n newest-cni-609529
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-609529 -n newest-cni-609529: exit status 7 (94.587167ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-609529 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.25s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (24.63s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-609529 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-609529 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2: (24.201330814s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-609529 -n newest-cni-609529
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (24.63s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.33s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-609529 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.33s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (3.94s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-609529 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-609529 -n newest-cni-609529
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-609529 -n newest-cni-609529: exit status 2 (378.201944ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-609529 -n newest-cni-609529
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-609529 -n newest-cni-609529: exit status 2 (495.05306ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-609529 --alsologtostderr -v=1
E0316 17:55:20.795795  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/no-preload-308593/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p newest-cni-609529 --alsologtostderr -v=1: (1.142625759s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-609529 -n newest-cni-609529
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-609529 -n newest-cni-609529
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.94s)
E0316 18:00:52.030042  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/auto-004165/client.crt: no such file or directory
E0316 18:00:52.035301  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/auto-004165/client.crt: no such file or directory
E0316 18:00:52.045546  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/auto-004165/client.crt: no such file or directory
E0316 18:00:52.065813  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/auto-004165/client.crt: no such file or directory
E0316 18:00:52.106124  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/auto-004165/client.crt: no such file or directory
E0316 18:00:52.186394  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/auto-004165/client.crt: no such file or directory
E0316 18:00:52.346765  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/auto-004165/client.crt: no such file or directory
E0316 18:00:52.667341  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/auto-004165/client.crt: no such file or directory
E0316 18:00:53.307576  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/auto-004165/client.crt: no such file or directory
E0316 18:00:54.588409  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/auto-004165/client.crt: no such file or directory
E0316 18:00:55.843944  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/default-k8s-diff-port-685848/client.crt: no such file or directory
E0316 18:00:57.149328  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/auto-004165/client.crt: no such file or directory
E0316 18:01:02.269951  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/auto-004165/client.crt: no such file or directory
E0316 18:01:12.510160  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/auto-004165/client.crt: no such file or directory
E0316 18:01:31.881703  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/kindnet-004165/client.crt: no such file or directory
E0316 18:01:31.887042  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/kindnet-004165/client.crt: no such file or directory
E0316 18:01:31.897389  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/kindnet-004165/client.crt: no such file or directory
E0316 18:01:31.917657  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/kindnet-004165/client.crt: no such file or directory
E0316 18:01:31.957916  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/kindnet-004165/client.crt: no such file or directory
E0316 18:01:32.038392  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/kindnet-004165/client.crt: no such file or directory
E0316 18:01:32.198737  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/kindnet-004165/client.crt: no such file or directory
E0316 18:01:32.519366  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/kindnet-004165/client.crt: no such file or directory
E0316 18:01:32.991007  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/auto-004165/client.crt: no such file or directory
E0316 18:01:33.160435  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/kindnet-004165/client.crt: no such file or directory
E0316 18:01:34.440702  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/kindnet-004165/client.crt: no such file or directory
E0316 18:01:35.879972  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/old-k8s-version-746380/client.crt: no such file or directory
E0316 18:01:37.002782  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/kindnet-004165/client.crt: no such file or directory
E0316 18:01:42.123671  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/kindnet-004165/client.crt: no such file or directory

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (66.83s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-004165 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-004165 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (1m6.832536352s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (66.83s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-004165 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.42s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (9.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-004165 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-788hl" [7c269d6e-2118-4682-b831-04b600145bb2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-788hl" [7c269d6e-2118-4682-b831-04b600145bb2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.004722657s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.39s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-004165 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.28s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-004165 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.29s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-004165 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.25s)
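The DNS, Localhost, and HairPin checks in each network-plugin group issue the same three probes from the netcat deployment; for the auto profile in this run they amount to (commands copied from the log above):

  kubectl --context auto-004165 exec deployment/netcat -- nslookup kubernetes.default
  kubectl --context auto-004165 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
  kubectl --context auto-004165 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"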

                                                
                                    
TestNetworkPlugins/group/calico/Start (74.56s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-004165 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-004165 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (1m14.563958971s)
--- PASS: TestNetworkPlugins/group/calico/Start (74.56s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-wmhmk" [05df2ef4-1ed3-429d-aea4-236c4c75cc00] Running
E0316 17:56:35.879792  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/old-k8s-version-746380/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005634971s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-004165 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.41s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (10.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-004165 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-4frbq" [1ba5c966-6c50-4636-94d2-d4bfbb9e254f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-4frbq" [1ba5c966-6c50-4636-94d2-d4bfbb9e254f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.004537358s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.31s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-004165 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.28s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-004165 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.24s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-004165 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.26s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (64.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-004165 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
E0316 17:57:36.953061  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/no-preload-308593/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-004165 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (1m4.399367393s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (64.40s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-z7jff" [b100f09a-0a32-4661-8700-e8b26e23145d] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004911096s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-004165 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.41s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (10.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-004165 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-zxgtw" [87861209-cd9c-4be9-8252-3c2506f32943] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0316 17:57:48.672231  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/addons-821353/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-zxgtw" [87861209-cd9c-4be9-8252-3c2506f32943] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.003794902s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.36s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-004165 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.23s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-004165 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.21s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-004165 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.19s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-004165 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.41s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (9.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-004165 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-g4xxw" [3e88ad2a-05cd-46a0-8e1b-9e738dcb170c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-g4xxw" [3e88ad2a-05cd-46a0-8e1b-9e738dcb170c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.004006102s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.41s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (84.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-004165 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-004165 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (1m24.368616603s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (84.37s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-004165 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-004165 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-004165 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.20s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (62.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-004165 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
E0316 17:59:08.210744  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/functional-193375/client.crt: no such file or directory
E0316 17:59:33.920580  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/default-k8s-diff-port-685848/client.crt: no such file or directory
E0316 17:59:33.925926  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/default-k8s-diff-port-685848/client.crt: no such file or directory
E0316 17:59:33.936182  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/default-k8s-diff-port-685848/client.crt: no such file or directory
E0316 17:59:33.956494  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/default-k8s-diff-port-685848/client.crt: no such file or directory
E0316 17:59:33.996813  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/default-k8s-diff-port-685848/client.crt: no such file or directory
E0316 17:59:34.077189  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/default-k8s-diff-port-685848/client.crt: no such file or directory
E0316 17:59:34.238029  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/default-k8s-diff-port-685848/client.crt: no such file or directory
E0316 17:59:34.558262  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/default-k8s-diff-port-685848/client.crt: no such file or directory
E0316 17:59:35.198869  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/default-k8s-diff-port-685848/client.crt: no such file or directory
E0316 17:59:36.479683  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/default-k8s-diff-port-685848/client.crt: no such file or directory
E0316 17:59:39.039881  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/default-k8s-diff-port-685848/client.crt: no such file or directory
E0316 17:59:44.160754  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/default-k8s-diff-port-685848/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-004165 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (1m2.149669075s)
--- PASS: TestNetworkPlugins/group/flannel/Start (62.15s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-004165 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.38s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-004165 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-5dpwf" [14424f7c-3ffd-499f-8ce0-fb619c8c316c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-5dpwf" [14424f7c-3ffd-499f-8ce0-fb619c8c316c] Running
E0316 17:59:54.401753  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/default-k8s-diff-port-685848/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.004073634s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.28s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-004165 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-004165 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-004165 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.19s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-x9f2h" [74190cf3-1ae0-49d4-a11f-046b2f1b0c77] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004423289s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-004165 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.41s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (9.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-004165 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-77qdg" [874e0f38-e243-4459-9640-a10f9700c743] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-77qdg" [874e0f38-e243-4459-9640-a10f9700c743] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.005053048s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.37s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-004165 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.30s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-004165 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.27s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-004165 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.22s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (86.82s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-004165 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-004165 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (1m26.819038133s)
--- PASS: TestNetworkPlugins/group/bridge/Start (86.82s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-004165 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (9.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-004165 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-s4hks" [2e68b916-b78d-46a2-80e7-513e5d192382] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-s4hks" [2e68b916-b78d-46a2-80e7-513e5d192382] Running
E0316 18:01:52.364369  285633 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/kindnet-004165/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.004092541s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.26s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-004165 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-004165 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-004165 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.18s)

                                                
                                    

Test skip (31/335)

TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within it.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within it.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.4/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within it.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0.58s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-723066 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-723066" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-723066
--- SKIP: TestDownloadOnlyKic (0.58s)

                                                
                                    
TestOffline (0s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
TestAddons/parallel/HelmTiller (0s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:444: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-986403" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-986403
--- SKIP: TestStartStop/group/disable-driver-mounts (0.21s)

                                                
                                    
TestNetworkPlugins/group/kubenet (5.57s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the containerd container runtime requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-004165 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-004165

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-004165

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-004165

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-004165

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-004165

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-004165

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-004165

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-004165

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-004165

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-004165

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-004165" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-004165"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-004165" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-004165"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-004165" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-004165"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-004165

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-004165" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-004165"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-004165" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-004165"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-004165" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-004165" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-004165" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-004165" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-004165" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-004165" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-004165" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-004165" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-004165" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-004165"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-004165" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-004165"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-004165" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-004165"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-004165" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-004165"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-004165" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-004165"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-004165" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-004165" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-004165" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-004165" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-004165"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-004165" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-004165"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-004165" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-004165"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-004165" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-004165"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-004165" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-004165"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-004165

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-004165" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-004165"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-004165" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-004165"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-004165" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-004165"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-004165" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-004165"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-004165" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-004165"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-004165" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-004165"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-004165" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-004165"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-004165" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-004165"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-004165" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-004165"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-004165" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-004165"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-004165" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-004165"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-004165" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-004165"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-004165" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-004165"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-004165" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-004165"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-004165" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-004165"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-004165" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-004165"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-004165" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-004165"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-004165" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-004165"

                                                
                                                
----------------------- debugLogs end: kubenet-004165 [took: 5.328869648s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-004165" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-004165
--- SKIP: TestNetworkPlugins/group/kubenet (5.57s)

                                                
                                    
TestNetworkPlugins/group/cilium (6.06s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-004165 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-004165

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-004165

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-004165

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-004165

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-004165

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-004165

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-004165

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-004165

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-004165

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-004165

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-004165" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-004165"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-004165" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-004165"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-004165" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-004165"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-004165

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-004165" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-004165"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-004165" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-004165"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-004165" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-004165" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-004165" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-004165" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-004165" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-004165" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-004165" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-004165" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-004165" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-004165"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-004165" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-004165"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-004165" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-004165"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-004165" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-004165"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-004165" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-004165"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-004165

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-004165

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-004165" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-004165" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-004165

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-004165

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-004165" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-004165" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-004165" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-004165" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-004165" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-004165" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-004165"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-004165" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-004165"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-004165" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-004165"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-004165" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-004165"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-004165" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-004165"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/18277-280225/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 16 Mar 2024 17:37:19 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: force-systemd-flag-641679
contexts:
- context:
    cluster: force-systemd-flag-641679
    extensions:
    - extension:
        last-update: Sat, 16 Mar 2024 17:37:19 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: force-systemd-flag-641679
  name: force-systemd-flag-641679
current-context: force-systemd-flag-641679
kind: Config
preferences: {}
users:
- name: force-systemd-flag-641679
  user:
    client-certificate: /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/force-systemd-flag-641679/client.crt
    client-key: /home/jenkins/minikube-integration/18277-280225/.minikube/profiles/force-systemd-flag-641679/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-004165

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-004165" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-004165"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-004165" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-004165"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-004165" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-004165"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-004165" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-004165"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-004165" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-004165"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-004165" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-004165"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-004165" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-004165"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-004165" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-004165"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-004165" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-004165"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-004165" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-004165"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-004165" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-004165"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-004165" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-004165"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-004165" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-004165"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-004165" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-004165"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-004165" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-004165"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-004165" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-004165"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-004165" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-004165"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-004165" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-004165"

                                                
                                                
----------------------- debugLogs end: cilium-004165 [took: 5.832317649s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-004165" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-004165
--- SKIP: TestNetworkPlugins/group/cilium (6.06s)

                                                
                                    