Test Report: Docker_Linux_containerd_arm64 18375

71179286cc00ab66370748dfc329f8d30a1d24a0:2024-03-14:33556

Test fail (7/335)

TestAddons/parallel/Ingress (38.19s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-122411 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-122411 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-122411 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [3740aa35-79da-4d05-8871-d98638231bbe] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [3740aa35-79da-4d05-8871-d98638231bbe] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.003339652s
addons_test.go:262: (dbg) Run:  out/minikube-linux-arm64 -p addons-122411 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-122411 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-arm64 -p addons-122411 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:297: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.06859893s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
addons_test.go:299: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:303: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
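The failing step above is the ingress-dns check: after applying testdata/ingress-dns-example-v1.yaml, the test resolves hello-john.test against the minikube node IP (192.168.49.2, printed at addons_test.go:291) and the query times out instead of answering. A minimal manual repro sketch built from the commands already shown in this log; the kube-system label used to locate the addon pod is an assumption, not taken from this run:

	# IP the test resolves against (addons_test.go:291)
	out/minikube-linux-arm64 -p addons-122411 ip
	# Repeat the failing query; a timeout matches the error above
	nslookup hello-john.test 192.168.49.2
	# Check the ingress-dns addon pod (label is an assumption)
	kubectl --context addons-122411 -n kube-system get pods -l app=minikube-ingress-dns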
addons_test.go:306: (dbg) Run:  out/minikube-linux-arm64 -p addons-122411 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-arm64 -p addons-122411 addons disable ingress-dns --alsologtostderr -v=1: (1.550676035s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-arm64 -p addons-122411 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-arm64 -p addons-122411 addons disable ingress --alsologtostderr -v=1: (7.802419724s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-122411
helpers_test.go:235: (dbg) docker inspect addons-122411:

-- stdout --
	[
	    {
	        "Id": "176ef05a369c7f222c83d39fff69434c357b6987b0bb949646512becd40d2cf1",
	        "Created": "2024-03-14T00:21:03.230132862Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1965180,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-03-14T00:21:03.555504645Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:db62270b4bb0cfcde696782f7a6322baca275275e31814ce9fd8998407bf461e",
	        "ResolvConfPath": "/var/lib/docker/containers/176ef05a369c7f222c83d39fff69434c357b6987b0bb949646512becd40d2cf1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/176ef05a369c7f222c83d39fff69434c357b6987b0bb949646512becd40d2cf1/hostname",
	        "HostsPath": "/var/lib/docker/containers/176ef05a369c7f222c83d39fff69434c357b6987b0bb949646512becd40d2cf1/hosts",
	        "LogPath": "/var/lib/docker/containers/176ef05a369c7f222c83d39fff69434c357b6987b0bb949646512becd40d2cf1/176ef05a369c7f222c83d39fff69434c357b6987b0bb949646512becd40d2cf1-json.log",
	        "Name": "/addons-122411",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-122411:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-122411",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/c8bdf5a27efba5a4997d4700b0518572d89c6d5a44e34063524bfeaf79ca3d07-init/diff:/var/lib/docker/overlay2/72e8565c3c6c9dcaff9dab92d595dc2eb0a265ce93caf6066e88703bac9975f6/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c8bdf5a27efba5a4997d4700b0518572d89c6d5a44e34063524bfeaf79ca3d07/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c8bdf5a27efba5a4997d4700b0518572d89c6d5a44e34063524bfeaf79ca3d07/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c8bdf5a27efba5a4997d4700b0518572d89c6d5a44e34063524bfeaf79ca3d07/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-122411",
	                "Source": "/var/lib/docker/volumes/addons-122411/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-122411",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-122411",
	                "name.minikube.sigs.k8s.io": "addons-122411",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d049a38919a06ce13ed0e4988e9299910a2fee7b311a9c0e1c0ff968b7a7b098",
	            "SandboxKey": "/var/run/docker/netns/d049a38919a0",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35041"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35040"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35037"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35039"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35038"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-122411": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "176ef05a369c",
	                        "addons-122411"
	                    ],
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "63eedb99593175e728ef3b639658b3fb3d5e886ced134b22ec30ab80d963032e",
	                    "EndpointID": "5ffa29c93b7a54bb69294aa72065945930797a3c143433b026dc3bd00c5b9f61",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "addons-122411",
	                        "176ef05a369c"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
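The port mappings and network settings in the inspect output above are what the harness reads back with Go templates later in this log (see the "Last Start" section below), e.g.:

	# Host port published for the container's 22/tcp (127.0.0.1:35041 above)
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-122411
	# Container IP on the addons-122411 network (192.168.49.2 above)
	docker container inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' addons-122411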
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-122411 -n addons-122411
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-122411 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-122411 logs -n 25: (2.117984149s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-540583   | jenkins | v1.32.0 | 14 Mar 24 00:20 UTC |                     |
	|         | -p download-only-540583              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.32.0 | 14 Mar 24 00:20 UTC | 14 Mar 24 00:20 UTC |
	| delete  | -p download-only-540583              | download-only-540583   | jenkins | v1.32.0 | 14 Mar 24 00:20 UTC | 14 Mar 24 00:20 UTC |
	| start   | -o=json --download-only              | download-only-541047   | jenkins | v1.32.0 | 14 Mar 24 00:20 UTC |                     |
	|         | -p download-only-541047              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2    |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.32.0 | 14 Mar 24 00:20 UTC | 14 Mar 24 00:20 UTC |
	| delete  | -p download-only-541047              | download-only-541047   | jenkins | v1.32.0 | 14 Mar 24 00:20 UTC | 14 Mar 24 00:20 UTC |
	| delete  | -p download-only-455584              | download-only-455584   | jenkins | v1.32.0 | 14 Mar 24 00:20 UTC | 14 Mar 24 00:20 UTC |
	| delete  | -p download-only-540583              | download-only-540583   | jenkins | v1.32.0 | 14 Mar 24 00:20 UTC | 14 Mar 24 00:20 UTC |
	| delete  | -p download-only-541047              | download-only-541047   | jenkins | v1.32.0 | 14 Mar 24 00:20 UTC | 14 Mar 24 00:20 UTC |
	| start   | --download-only -p                   | download-docker-976036 | jenkins | v1.32.0 | 14 Mar 24 00:20 UTC |                     |
	|         | download-docker-976036               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p download-docker-976036            | download-docker-976036 | jenkins | v1.32.0 | 14 Mar 24 00:20 UTC | 14 Mar 24 00:20 UTC |
	| start   | --download-only -p                   | binary-mirror-069067   | jenkins | v1.32.0 | 14 Mar 24 00:20 UTC |                     |
	|         | binary-mirror-069067                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:39983               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-069067              | binary-mirror-069067   | jenkins | v1.32.0 | 14 Mar 24 00:20 UTC | 14 Mar 24 00:20 UTC |
	| addons  | enable dashboard -p                  | addons-122411          | jenkins | v1.32.0 | 14 Mar 24 00:20 UTC |                     |
	|         | addons-122411                        |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                 | addons-122411          | jenkins | v1.32.0 | 14 Mar 24 00:20 UTC |                     |
	|         | addons-122411                        |                        |         |         |                     |                     |
	| start   | -p addons-122411 --wait=true         | addons-122411          | jenkins | v1.32.0 | 14 Mar 24 00:20 UTC | 14 Mar 24 00:22 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --driver=docker        |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	| ip      | addons-122411 ip                     | addons-122411          | jenkins | v1.32.0 | 14 Mar 24 00:22 UTC | 14 Mar 24 00:22 UTC |
	| addons  | addons-122411 addons disable         | addons-122411          | jenkins | v1.32.0 | 14 Mar 24 00:22 UTC | 14 Mar 24 00:22 UTC |
	|         | registry --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | addons-122411 addons                 | addons-122411          | jenkins | v1.32.0 | 14 Mar 24 00:23 UTC | 14 Mar 24 00:23 UTC |
	|         | disable metrics-server               |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p          | addons-122411          | jenkins | v1.32.0 | 14 Mar 24 00:23 UTC | 14 Mar 24 00:23 UTC |
	|         | addons-122411                        |                        |         |         |                     |                     |
	| ssh     | addons-122411 ssh curl -s            | addons-122411          | jenkins | v1.32.0 | 14 Mar 24 00:23 UTC | 14 Mar 24 00:23 UTC |
	|         | http://127.0.0.1/ -H 'Host:          |                        |         |         |                     |                     |
	|         | nginx.example.com'                   |                        |         |         |                     |                     |
	| ip      | addons-122411 ip                     | addons-122411          | jenkins | v1.32.0 | 14 Mar 24 00:23 UTC | 14 Mar 24 00:23 UTC |
	| addons  | addons-122411 addons disable         | addons-122411          | jenkins | v1.32.0 | 14 Mar 24 00:23 UTC | 14 Mar 24 00:23 UTC |
	|         | ingress-dns --alsologtostderr        |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | addons-122411 addons disable         | addons-122411          | jenkins | v1.32.0 | 14 Mar 24 00:23 UTC | 14 Mar 24 00:23 UTC |
	|         | ingress --alsologtostderr -v=1       |                        |         |         |                     |                     |
	| addons  | addons-122411 addons                 | addons-122411          | jenkins | v1.32.0 | 14 Mar 24 00:23 UTC |                     |
	|         | disable csi-hostpath-driver          |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
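	# Sketch: the cluster-creating "start" invocation from the Audit rows above, joined onto one line.
	# The binary path is taken from the dbg lines earlier in this report; flag order follows the table.
	out/minikube-linux-arm64 start -p addons-122411 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker --container-runtime=containerd --addons=ingress --addons=ingress-dns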
	
	
	==> Last Start <==
	Log file created at: 2024/03/14 00:20:39
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0314 00:20:39.439654 1964718 out.go:291] Setting OutFile to fd 1 ...
	I0314 00:20:39.439838 1964718 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 00:20:39.439850 1964718 out.go:304] Setting ErrFile to fd 2...
	I0314 00:20:39.439855 1964718 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 00:20:39.440131 1964718 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18375-1958430/.minikube/bin
	I0314 00:20:39.440654 1964718 out.go:298] Setting JSON to false
	I0314 00:20:39.441556 1964718 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":28990,"bootTime":1710346650,"procs":166,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0314 00:20:39.441629 1964718 start.go:139] virtualization:  
	I0314 00:20:39.443812 1964718 out.go:177] * [addons-122411] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0314 00:20:39.445681 1964718 out.go:177]   - MINIKUBE_LOCATION=18375
	I0314 00:20:39.447507 1964718 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0314 00:20:39.445860 1964718 notify.go:220] Checking for updates...
	I0314 00:20:39.451532 1964718 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18375-1958430/kubeconfig
	I0314 00:20:39.453725 1964718 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18375-1958430/.minikube
	I0314 00:20:39.455904 1964718 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0314 00:20:39.457667 1964718 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0314 00:20:39.459775 1964718 driver.go:392] Setting default libvirt URI to qemu:///system
	I0314 00:20:39.480126 1964718 docker.go:122] docker version: linux-25.0.4:Docker Engine - Community
	I0314 00:20:39.480256 1964718 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0314 00:20:39.544145 1964718 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:49 SystemTime:2024-03-14 00:20:39.535313445 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0314 00:20:39.544249 1964718 docker.go:295] overlay module found
	I0314 00:20:39.546120 1964718 out.go:177] * Using the docker driver based on user configuration
	I0314 00:20:39.547820 1964718 start.go:297] selected driver: docker
	I0314 00:20:39.547837 1964718 start.go:901] validating driver "docker" against <nil>
	I0314 00:20:39.547849 1964718 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0314 00:20:39.548497 1964718 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0314 00:20:39.601302 1964718 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:49 SystemTime:2024-03-14 00:20:39.592656374 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0314 00:20:39.601482 1964718 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0314 00:20:39.601717 1964718 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0314 00:20:39.603526 1964718 out.go:177] * Using Docker driver with root privileges
	I0314 00:20:39.605681 1964718 cni.go:84] Creating CNI manager for ""
	I0314 00:20:39.605701 1964718 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0314 00:20:39.605711 1964718 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0314 00:20:39.605815 1964718 start.go:340] cluster config:
	{Name:addons-122411 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-122411 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 00:20:39.608739 1964718 out.go:177] * Starting "addons-122411" primary control-plane node in "addons-122411" cluster
	I0314 00:20:39.610383 1964718 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0314 00:20:39.612431 1964718 out.go:177] * Pulling base image v0.0.42-1710284843-18375 ...
	I0314 00:20:39.614095 1964718 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0314 00:20:39.614126 1964718 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f in local docker daemon
	I0314 00:20:39.614149 1964718 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18375-1958430/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4
	I0314 00:20:39.614158 1964718 cache.go:56] Caching tarball of preloaded images
	I0314 00:20:39.614257 1964718 preload.go:173] Found /home/jenkins/minikube-integration/18375-1958430/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0314 00:20:39.614268 1964718 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on containerd
	I0314 00:20:39.614617 1964718 profile.go:142] Saving config to /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/addons-122411/config.json ...
	I0314 00:20:39.614640 1964718 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/addons-122411/config.json: {Name:mk02168d44307eb6068e4761362f457ab1ba607d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 00:20:39.630213 1964718 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f to local cache
	I0314 00:20:39.630348 1964718 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f in local cache directory
	I0314 00:20:39.630371 1964718 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f in local cache directory, skipping pull
	I0314 00:20:39.630377 1964718 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f exists in cache, skipping pull
	I0314 00:20:39.630385 1964718 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f as a tarball
	I0314 00:20:39.630398 1964718 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f from local cache
	I0314 00:20:55.851630 1964718 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f from cached tarball
	I0314 00:20:55.851671 1964718 cache.go:194] Successfully downloaded all kic artifacts
	I0314 00:20:55.851714 1964718 start.go:360] acquireMachinesLock for addons-122411: {Name:mk319905a0730f62c640d4a155e69f9369879bc3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 00:20:55.851864 1964718 start.go:364] duration metric: took 124.98µs to acquireMachinesLock for "addons-122411"
	I0314 00:20:55.851899 1964718 start.go:93] Provisioning new machine with config: &{Name:addons-122411 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-122411 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0314 00:20:55.851981 1964718 start.go:125] createHost starting for "" (driver="docker")
	I0314 00:20:55.854177 1964718 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0314 00:20:55.854419 1964718 start.go:159] libmachine.API.Create for "addons-122411" (driver="docker")
	I0314 00:20:55.854455 1964718 client.go:168] LocalClient.Create starting
	I0314 00:20:55.854572 1964718 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18375-1958430/.minikube/certs/ca.pem
	I0314 00:20:56.269665 1964718 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18375-1958430/.minikube/certs/cert.pem
	I0314 00:20:56.489561 1964718 cli_runner.go:164] Run: docker network inspect addons-122411 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0314 00:20:56.504976 1964718 cli_runner.go:211] docker network inspect addons-122411 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0314 00:20:56.505076 1964718 network_create.go:281] running [docker network inspect addons-122411] to gather additional debugging logs...
	I0314 00:20:56.505097 1964718 cli_runner.go:164] Run: docker network inspect addons-122411
	W0314 00:20:56.519684 1964718 cli_runner.go:211] docker network inspect addons-122411 returned with exit code 1
	I0314 00:20:56.519718 1964718 network_create.go:284] error running [docker network inspect addons-122411]: docker network inspect addons-122411: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-122411 not found
	I0314 00:20:56.519734 1964718 network_create.go:286] output of [docker network inspect addons-122411]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-122411 not found
	
	** /stderr **
	I0314 00:20:56.519829 1964718 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0314 00:20:56.536914 1964718 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400256bff0}
	I0314 00:20:56.536950 1964718 network_create.go:124] attempt to create docker network addons-122411 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0314 00:20:56.537009 1964718 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-122411 addons-122411
	I0314 00:20:56.598219 1964718 network_create.go:108] docker network addons-122411 192.168.49.0/24 created
	I0314 00:20:56.598252 1964718 kic.go:121] calculated static IP "192.168.49.2" for the "addons-122411" container
	I0314 00:20:56.598344 1964718 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0314 00:20:56.612420 1964718 cli_runner.go:164] Run: docker volume create addons-122411 --label name.minikube.sigs.k8s.io=addons-122411 --label created_by.minikube.sigs.k8s.io=true
	I0314 00:20:56.628706 1964718 oci.go:103] Successfully created a docker volume addons-122411
	I0314 00:20:56.628787 1964718 cli_runner.go:164] Run: docker run --rm --name addons-122411-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-122411 --entrypoint /usr/bin/test -v addons-122411:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f -d /var/lib
	I0314 00:20:58.665986 1964718 cli_runner.go:217] Completed: docker run --rm --name addons-122411-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-122411 --entrypoint /usr/bin/test -v addons-122411:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f -d /var/lib: (2.037153702s)
	I0314 00:20:58.666023 1964718 oci.go:107] Successfully prepared a docker volume addons-122411
	I0314 00:20:58.666058 1964718 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0314 00:20:58.666079 1964718 kic.go:194] Starting extracting preloaded images to volume ...
	I0314 00:20:58.666152 1964718 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18375-1958430/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-122411:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f -I lz4 -xf /preloaded.tar -C /extractDir
	I0314 00:21:03.161685 1964718 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18375-1958430/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-122411:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f -I lz4 -xf /preloaded.tar -C /extractDir: (4.495493894s)
	I0314 00:21:03.161719 1964718 kic.go:203] duration metric: took 4.495635866s to extract preloaded images to volume ...
	W0314 00:21:03.161861 1964718 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0314 00:21:03.161986 1964718 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0314 00:21:03.212688 1964718 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-122411 --name addons-122411 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-122411 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-122411 --network addons-122411 --ip 192.168.49.2 --volume addons-122411:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f
	I0314 00:21:03.566159 1964718 cli_runner.go:164] Run: docker container inspect addons-122411 --format={{.State.Running}}
	I0314 00:21:03.587361 1964718 cli_runner.go:164] Run: docker container inspect addons-122411 --format={{.State.Status}}
	I0314 00:21:03.614924 1964718 cli_runner.go:164] Run: docker exec addons-122411 stat /var/lib/dpkg/alternatives/iptables
	I0314 00:21:03.669549 1964718 oci.go:144] the created container "addons-122411" has a running status.
	I0314 00:21:03.669577 1964718 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18375-1958430/.minikube/machines/addons-122411/id_rsa...
	I0314 00:21:04.370991 1964718 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18375-1958430/.minikube/machines/addons-122411/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0314 00:21:04.406523 1964718 cli_runner.go:164] Run: docker container inspect addons-122411 --format={{.State.Status}}
	I0314 00:21:04.428788 1964718 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0314 00:21:04.428808 1964718 kic_runner.go:114] Args: [docker exec --privileged addons-122411 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0314 00:21:04.490950 1964718 cli_runner.go:164] Run: docker container inspect addons-122411 --format={{.State.Status}}
	I0314 00:21:04.509667 1964718 machine.go:94] provisionDockerMachine start ...
	I0314 00:21:04.509758 1964718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-122411
	I0314 00:21:04.529765 1964718 main.go:141] libmachine: Using SSH client type: native
	I0314 00:21:04.530013 1964718 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e00] 0x3e4660 <nil>  [] 0s} 127.0.0.1 35041 <nil> <nil>}
	I0314 00:21:04.530022 1964718 main.go:141] libmachine: About to run SSH command:
	hostname
	I0314 00:21:04.670868 1964718 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-122411
	
	I0314 00:21:04.670895 1964718 ubuntu.go:169] provisioning hostname "addons-122411"
	I0314 00:21:04.670966 1964718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-122411
	I0314 00:21:04.691129 1964718 main.go:141] libmachine: Using SSH client type: native
	I0314 00:21:04.691391 1964718 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e00] 0x3e4660 <nil>  [] 0s} 127.0.0.1 35041 <nil> <nil>}
	I0314 00:21:04.691403 1964718 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-122411 && echo "addons-122411" | sudo tee /etc/hostname
	I0314 00:21:04.846854 1964718 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-122411
	
	I0314 00:21:04.846944 1964718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-122411
	I0314 00:21:04.862893 1964718 main.go:141] libmachine: Using SSH client type: native
	I0314 00:21:04.863173 1964718 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e00] 0x3e4660 <nil>  [] 0s} 127.0.0.1 35041 <nil> <nil>}
	I0314 00:21:04.863195 1964718 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-122411' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-122411/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-122411' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0314 00:21:05.016038 1964718 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0314 00:21:05.016117 1964718 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18375-1958430/.minikube CaCertPath:/home/jenkins/minikube-integration/18375-1958430/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18375-1958430/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18375-1958430/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18375-1958430/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18375-1958430/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18375-1958430/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18375-1958430/.minikube}
	I0314 00:21:05.016156 1964718 ubuntu.go:177] setting up certificates
	I0314 00:21:05.016168 1964718 provision.go:84] configureAuth start
	I0314 00:21:05.016238 1964718 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-122411
	I0314 00:21:05.033590 1964718 provision.go:143] copyHostCerts
	I0314 00:21:05.033688 1964718 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-1958430/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18375-1958430/.minikube/cert.pem (1123 bytes)
	I0314 00:21:05.033854 1964718 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-1958430/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18375-1958430/.minikube/key.pem (1675 bytes)
	I0314 00:21:05.033926 1964718 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-1958430/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18375-1958430/.minikube/ca.pem (1078 bytes)
	I0314 00:21:05.033991 1964718 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18375-1958430/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18375-1958430/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18375-1958430/.minikube/certs/ca-key.pem org=jenkins.addons-122411 san=[127.0.0.1 192.168.49.2 addons-122411 localhost minikube]
	I0314 00:21:05.430048 1964718 provision.go:177] copyRemoteCerts
	I0314 00:21:05.430122 1964718 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0314 00:21:05.430162 1964718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-122411
	I0314 00:21:05.449939 1964718 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35041 SSHKeyPath:/home/jenkins/minikube-integration/18375-1958430/.minikube/machines/addons-122411/id_rsa Username:docker}
	I0314 00:21:05.548303 1964718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-1958430/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0314 00:21:05.572615 1964718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-1958430/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0314 00:21:05.596350 1964718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-1958430/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0314 00:21:05.621503 1964718 provision.go:87] duration metric: took 605.321349ms to configureAuth
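configureAuth above generates a server certificate whose subject alternative names cover the container IP, loopback, and the host names logged in the san=[...] list. A hedged sketch of building that SAN set with the standard crypto/x509 package; it self-signs for brevity, whereas minikube signs with the CA key pair from the auth options:

	// servercert.go: sketch of a server cert with the SAN set logged above.
	package main

	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.addons-122411"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config dump
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
			DNSNames:     []string{"addons-122411", "localhost", "minikube"},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}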
	I0314 00:21:05.621532 1964718 ubuntu.go:193] setting minikube options for container-runtime
	I0314 00:21:05.621722 1964718 config.go:182] Loaded profile config "addons-122411": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0314 00:21:05.621744 1964718 machine.go:97] duration metric: took 1.112052046s to provisionDockerMachine
	I0314 00:21:05.621751 1964718 client.go:171] duration metric: took 9.767287281s to LocalClient.Create
	I0314 00:21:05.621774 1964718 start.go:167] duration metric: took 9.767356204s to libmachine.API.Create "addons-122411"
	I0314 00:21:05.621785 1964718 start.go:293] postStartSetup for "addons-122411" (driver="docker")
	I0314 00:21:05.621795 1964718 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0314 00:21:05.621858 1964718 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0314 00:21:05.621911 1964718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-122411
	I0314 00:21:05.638041 1964718 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35041 SSHKeyPath:/home/jenkins/minikube-integration/18375-1958430/.minikube/machines/addons-122411/id_rsa Username:docker}
	I0314 00:21:05.740553 1964718 ssh_runner.go:195] Run: cat /etc/os-release
	I0314 00:21:05.743792 1964718 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0314 00:21:05.743828 1964718 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0314 00:21:05.743858 1964718 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0314 00:21:05.743867 1964718 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0314 00:21:05.743877 1964718 filesync.go:126] Scanning /home/jenkins/minikube-integration/18375-1958430/.minikube/addons for local assets ...
	I0314 00:21:05.743954 1964718 filesync.go:126] Scanning /home/jenkins/minikube-integration/18375-1958430/.minikube/files for local assets ...
	I0314 00:21:05.743985 1964718 start.go:296] duration metric: took 122.194455ms for postStartSetup
	I0314 00:21:05.744299 1964718 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-122411
	I0314 00:21:05.759833 1964718 profile.go:142] Saving config to /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/addons-122411/config.json ...
	I0314 00:21:05.760130 1964718 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0314 00:21:05.760195 1964718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-122411
	I0314 00:21:05.775611 1964718 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35041 SSHKeyPath:/home/jenkins/minikube-integration/18375-1958430/.minikube/machines/addons-122411/id_rsa Username:docker}
	I0314 00:21:05.871856 1964718 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0314 00:21:05.876211 1964718 start.go:128] duration metric: took 10.024212959s to createHost
	I0314 00:21:05.876239 1964718 start.go:83] releasing machines lock for "addons-122411", held for 10.024356261s
	I0314 00:21:05.876334 1964718 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-122411
	I0314 00:21:05.892401 1964718 ssh_runner.go:195] Run: cat /version.json
	I0314 00:21:05.892426 1964718 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0314 00:21:05.892460 1964718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-122411
	I0314 00:21:05.892509 1964718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-122411
	I0314 00:21:05.913974 1964718 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35041 SSHKeyPath:/home/jenkins/minikube-integration/18375-1958430/.minikube/machines/addons-122411/id_rsa Username:docker}
	I0314 00:21:05.915169 1964718 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35041 SSHKeyPath:/home/jenkins/minikube-integration/18375-1958430/.minikube/machines/addons-122411/id_rsa Username:docker}
	I0314 00:21:06.125947 1964718 ssh_runner.go:195] Run: systemctl --version
	I0314 00:21:06.130672 1964718 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0314 00:21:06.135129 1964718 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0314 00:21:06.161302 1964718 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0314 00:21:06.161381 1964718 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0314 00:21:06.191318 1964718 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
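The loopback patch above injects a "name" field into any loopback CNI conf that lacks one and pins cniVersion to 1.0.0; the bridge/podman confs are then renamed aside so only kindnet manages pod networking. A sketch of the loopback patch using encoding/json rather than sed (the file name 99-loopback.conf is illustrative):

	// cnipatch.go: sketch of the loopback CNI conf patch shown above.
	package main

	import (
		"encoding/json"
		"os"
	)

	func patchLoopback(path string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		conf := map[string]interface{}{}
		if err := json.Unmarshal(data, &conf); err != nil {
			return err
		}
		if conf["type"] != "loopback" {
			return nil // only loopback confs are patched
		}
		if _, ok := conf["name"]; !ok {
			conf["name"] = "loopback"
		}
		conf["cniVersion"] = "1.0.0"
		out, err := json.MarshalIndent(conf, "", "  ")
		if err != nil {
			return err
		}
		return os.WriteFile(path, out, 0644)
	}

	func main() {
		if err := patchLoopback("99-loopback.conf"); err != nil {
			os.Exit(1)
		}
	}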
	I0314 00:21:06.191340 1964718 start.go:494] detecting cgroup driver to use...
	I0314 00:21:06.191406 1964718 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0314 00:21:06.191481 1964718 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0314 00:21:06.204090 1964718 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0314 00:21:06.215808 1964718 docker.go:217] disabling cri-docker service (if available) ...
	I0314 00:21:06.215895 1964718 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0314 00:21:06.229770 1964718 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0314 00:21:06.245484 1964718 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0314 00:21:06.330138 1964718 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0314 00:21:06.421801 1964718 docker.go:233] disabling docker service ...
	I0314 00:21:06.421879 1964718 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0314 00:21:06.441280 1964718 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0314 00:21:06.453178 1964718 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0314 00:21:06.534792 1964718 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0314 00:21:06.625563 1964718 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0314 00:21:06.636627 1964718 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0314 00:21:06.653295 1964718 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0314 00:21:06.664274 1964718 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0314 00:21:06.674861 1964718 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0314 00:21:06.674969 1964718 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0314 00:21:06.685664 1964718 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0314 00:21:06.696113 1964718 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0314 00:21:06.705870 1964718 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0314 00:21:06.716431 1964718 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0314 00:21:06.725348 1964718 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0314 00:21:06.734957 1964718 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0314 00:21:06.743627 1964718 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0314 00:21:06.752242 1964718 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 00:21:06.844979 1964718 ssh_runner.go:195] Run: sudo systemctl restart containerd
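The sed calls above rewrite /etc/containerd/config.toml in place before the restart: pin the sandbox image to registry.k8s.io/pause:3.9, set SystemdCgroup = false to match the detected cgroupfs driver, and move any v1 runtimes to the io.containerd.runc.v2 shim. A sketch of the same substitutions as Go regex rewrites over a local config.toml (path illustrative):

	// toml_rewrite.go: sketch of the config.toml substitutions shown above.
	package main

	import (
		"os"
		"regexp"
	)

	func main() {
		path := "config.toml"
		data, err := os.ReadFile(path)
		if err != nil {
			panic(err)
		}
		s := string(data)
		rules := []struct{ re, to string }{
			{`(?m)^(\s*)sandbox_image = .*$`, `${1}sandbox_image = "registry.k8s.io/pause:3.9"`},
			{`(?m)^(\s*)restrict_oom_score_adj = .*$`, `${1}restrict_oom_score_adj = false`},
			{`(?m)^(\s*)SystemdCgroup = .*$`, `${1}SystemdCgroup = false`},
			{`"io\.containerd\.runtime\.v1\.linux"`, `"io.containerd.runc.v2"`},
			{`"io\.containerd\.runc\.v1"`, `"io.containerd.runc.v2"`},
			{`(?m)^(\s*)conf_dir = .*$`, `${1}conf_dir = "/etc/cni/net.d"`},
		}
		for _, r := range rules {
			s = regexp.MustCompile(r.re).ReplaceAllString(s, r.to)
		}
		if err := os.WriteFile(path, []byte(s), 0644); err != nil {
			panic(err)
		}
	}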
	I0314 00:21:06.974874 1964718 start.go:541] Will wait 60s for socket path /run/containerd/containerd.sock
	I0314 00:21:06.975010 1964718 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0314 00:21:06.978629 1964718 start.go:562] Will wait 60s for crictl version
	I0314 00:21:06.978724 1964718 ssh_runner.go:195] Run: which crictl
	I0314 00:21:06.982128 1964718 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0314 00:21:07.023598 1964718 start.go:578] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.28
	RuntimeApiVersion:  v1
	I0314 00:21:07.023718 1964718 ssh_runner.go:195] Run: containerd --version
	I0314 00:21:07.047573 1964718 ssh_runner.go:195] Run: containerd --version
	I0314 00:21:07.072566 1964718 out.go:177] * Preparing Kubernetes v1.28.4 on containerd 1.6.28 ...
	I0314 00:21:07.074666 1964718 cli_runner.go:164] Run: docker network inspect addons-122411 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0314 00:21:07.089734 1964718 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0314 00:21:07.093528 1964718 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 00:21:07.104622 1964718 kubeadm.go:877] updating cluster {Name:addons-122411 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-122411 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0314 00:21:07.104748 1964718 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0314 00:21:07.104819 1964718 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 00:21:07.146089 1964718 containerd.go:612] all images are preloaded for containerd runtime.
	I0314 00:21:07.146115 1964718 containerd.go:519] Images already preloaded, skipping extraction
	I0314 00:21:07.146206 1964718 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 00:21:07.182275 1964718 containerd.go:612] all images are preloaded for containerd runtime.
	I0314 00:21:07.182298 1964718 cache_images.go:84] Images are preloaded, skipping loading
	I0314 00:21:07.182306 1964718 kubeadm.go:928] updating node { 192.168.49.2 8443 v1.28.4 containerd true true} ...
	I0314 00:21:07.182412 1964718 kubeadm.go:940] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-122411 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-122411 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
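The kubelet unit above is a systemd drop-in: the empty ExecStart= clears the packaged command line so the second ExecStart= can restate it with node-specific flags. A sketch of rendering such a drop-in with text/template; the field values mirror the log and are illustrative:

	// kubelet_unit.go: sketch of rendering the kubelet systemd drop-in above.
	package main

	import (
		"os"
		"text/template"
	)

	const unit = `[Unit]
	Wants={{.Runtime}}.service

	[Service]
	ExecStart=
	ExecStart={{.Kubelet}} --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.Node}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.IP}}

	[Install]
	`

	func main() {
		t := template.Must(template.New("kubelet").Parse(unit))
		t.Execute(os.Stdout, map[string]string{
			"Runtime": "containerd",
			"Kubelet": "/var/lib/minikube/binaries/v1.28.4/kubelet",
			"Node":    "addons-122411",
			"IP":      "192.168.49.2",
		})
	}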
	I0314 00:21:07.182486 1964718 ssh_runner.go:195] Run: sudo crictl info
	I0314 00:21:07.218396 1964718 cni.go:84] Creating CNI manager for ""
	I0314 00:21:07.218423 1964718 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0314 00:21:07.218432 1964718 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0314 00:21:07.218486 1964718 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-122411 NodeName:addons-122411 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0314 00:21:07.218657 1964718 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-122411"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0314 00:21:07.218738 1964718 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0314 00:21:07.227391 1964718 binaries.go:44] Found k8s binaries, skipping transfer
	I0314 00:21:07.227467 1964718 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0314 00:21:07.236041 1964718 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0314 00:21:07.255161 1964718 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0314 00:21:07.273098 1964718 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2167 bytes)
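The kubeadm.yaml staged above (2167 bytes, written to kubeadm.yaml.new first) stacks four YAML documents: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A small sketch that splits such a file on document separators and reports each kind, using only the standard library:

	// yamlsplit.go: sketch of splitting the multi-document kubeadm.yaml above.
	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		data, err := os.ReadFile("kubeadm.yaml")
		if err != nil {
			panic(err)
		}
		for i, doc := range strings.Split(string(data), "\n---\n") {
			for _, line := range strings.Split(doc, "\n") {
				if strings.HasPrefix(line, "kind:") {
					fmt.Printf("doc %d: %s\n", i, strings.TrimSpace(strings.TrimPrefix(line, "kind:")))
				}
			}
		}
	}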
	I0314 00:21:07.291409 1964718 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0314 00:21:07.295167 1964718 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 00:21:07.306057 1964718 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 00:21:07.395887 1964718 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 00:21:07.416191 1964718 certs.go:68] Setting up /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/addons-122411 for IP: 192.168.49.2
	I0314 00:21:07.416266 1964718 certs.go:194] generating shared ca certs ...
	I0314 00:21:07.416296 1964718 certs.go:226] acquiring lock for ca certs: {Name:mka77573162012513ec65b9398fcff30bed9742a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 00:21:07.416851 1964718 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/18375-1958430/.minikube/ca.key
	I0314 00:21:07.913061 1964718 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18375-1958430/.minikube/ca.crt ...
	I0314 00:21:07.913091 1964718 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-1958430/.minikube/ca.crt: {Name:mk9617ca7b421c9e0f4ae545b30ab46dcf93e5e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 00:21:07.913319 1964718 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18375-1958430/.minikube/ca.key ...
	I0314 00:21:07.913334 1964718 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-1958430/.minikube/ca.key: {Name:mk728dfa98dba73d9d122892bf9cd7dba060e6cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 00:21:07.913493 1964718 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18375-1958430/.minikube/proxy-client-ca.key
	I0314 00:21:08.834109 1964718 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18375-1958430/.minikube/proxy-client-ca.crt ...
	I0314 00:21:08.834141 1964718 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-1958430/.minikube/proxy-client-ca.crt: {Name:mk711ffb2c559c41ba96100f40957a791aad228b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 00:21:08.834335 1964718 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18375-1958430/.minikube/proxy-client-ca.key ...
	I0314 00:21:08.834348 1964718 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-1958430/.minikube/proxy-client-ca.key: {Name:mk36e88d5ffb4c3e3fbed3cae42396a7e18897e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 00:21:08.834429 1964718 certs.go:256] generating profile certs ...
	I0314 00:21:08.834490 1964718 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/addons-122411/client.key
	I0314 00:21:08.834506 1964718 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/addons-122411/client.crt with IP's: []
	I0314 00:21:09.214268 1964718 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/addons-122411/client.crt ...
	I0314 00:21:09.214300 1964718 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/addons-122411/client.crt: {Name:mk90963949c35f323f6ba09bc2a6fe76b783f003 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 00:21:09.214593 1964718 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/addons-122411/client.key ...
	I0314 00:21:09.214613 1964718 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/addons-122411/client.key: {Name:mkb343a069b8a709c63a2f0ad3c91a7c54502c85 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 00:21:09.214729 1964718 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/addons-122411/apiserver.key.d086d0a7
	I0314 00:21:09.214757 1964718 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/addons-122411/apiserver.crt.d086d0a7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0314 00:21:09.771877 1964718 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/addons-122411/apiserver.crt.d086d0a7 ...
	I0314 00:21:09.771948 1964718 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/addons-122411/apiserver.crt.d086d0a7: {Name:mk4163dc53db6e61bde0a2aaeebc0a5764adc32b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 00:21:09.772757 1964718 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/addons-122411/apiserver.key.d086d0a7 ...
	I0314 00:21:09.772774 1964718 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/addons-122411/apiserver.key.d086d0a7: {Name:mk878e782e54fe6cd1cfa78e83f9587e99ec19bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 00:21:09.773353 1964718 certs.go:381] copying /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/addons-122411/apiserver.crt.d086d0a7 -> /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/addons-122411/apiserver.crt
	I0314 00:21:09.773479 1964718 certs.go:385] copying /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/addons-122411/apiserver.key.d086d0a7 -> /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/addons-122411/apiserver.key
	I0314 00:21:09.773537 1964718 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/addons-122411/proxy-client.key
	I0314 00:21:09.773559 1964718 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/addons-122411/proxy-client.crt with IP's: []
	I0314 00:21:10.169281 1964718 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/addons-122411/proxy-client.crt ...
	I0314 00:21:10.169316 1964718 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/addons-122411/proxy-client.crt: {Name:mk7e508824249aa5b49fe9f3ccafee17d9a555e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 00:21:10.169510 1964718 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/addons-122411/proxy-client.key ...
	I0314 00:21:10.169525 1964718 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/addons-122411/proxy-client.key: {Name:mk040fc6bc0f7930ba905b638829c84a9072ebfd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 00:21:10.170098 1964718 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-1958430/.minikube/certs/ca-key.pem (1679 bytes)
	I0314 00:21:10.170144 1964718 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-1958430/.minikube/certs/ca.pem (1078 bytes)
	I0314 00:21:10.170182 1964718 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-1958430/.minikube/certs/cert.pem (1123 bytes)
	I0314 00:21:10.170210 1964718 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-1958430/.minikube/certs/key.pem (1675 bytes)
	I0314 00:21:10.170880 1964718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-1958430/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0314 00:21:10.197460 1964718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-1958430/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0314 00:21:10.224075 1964718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-1958430/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0314 00:21:10.249068 1964718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-1958430/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0314 00:21:10.274323 1964718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/addons-122411/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0314 00:21:10.298588 1964718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/addons-122411/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0314 00:21:10.323086 1964718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/addons-122411/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0314 00:21:10.347064 1964718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/addons-122411/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0314 00:21:10.370965 1964718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-1958430/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0314 00:21:10.395321 1964718 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0314 00:21:10.413782 1964718 ssh_runner.go:195] Run: openssl version
	I0314 00:21:10.419651 1964718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0314 00:21:10.429050 1964718 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0314 00:21:10.432417 1964718 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 14 00:21 /usr/share/ca-certificates/minikubeCA.pem
	I0314 00:21:10.432490 1964718 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0314 00:21:10.439479 1964718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0314 00:21:10.448722 1964718 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0314 00:21:10.451925 1964718 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0314 00:21:10.451998 1964718 kubeadm.go:391] StartCluster: {Name:addons-122411 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-122411 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 00:21:10.452086 1964718 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0314 00:21:10.452168 1964718 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 00:21:10.493696 1964718 cri.go:89] found id: ""
	I0314 00:21:10.493778 1964718 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0314 00:21:10.506260 1964718 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0314 00:21:10.515912 1964718 kubeadm.go:213] ignoring SystemVerification for kubeadm because of docker driver
	I0314 00:21:10.515992 1964718 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0314 00:21:10.528049 1964718 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 00:21:10.528070 1964718 kubeadm.go:156] found existing configuration files:
	
	I0314 00:21:10.528127 1964718 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0314 00:21:10.538230 1964718 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 00:21:10.538316 1964718 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0314 00:21:10.547526 1964718 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0314 00:21:10.557785 1964718 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 00:21:10.557869 1964718 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0314 00:21:10.566557 1964718 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0314 00:21:10.575893 1964718 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 00:21:10.575957 1964718 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0314 00:21:10.585058 1964718 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0314 00:21:10.594178 1964718 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 00:21:10.594253 1964718 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
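The cleanup loop above keeps a kubeconfig only if it already points at https://control-plane.minikube.internal:8443; on this first start every grep exits with status 2 because the files do not exist, so the rm calls are no-ops. A sketch of the same keep-or-remove decision in Go:

	// staleconf.go: sketch of the stale kubeconfig cleanup shown above.
	package main

	import (
		"bytes"
		"fmt"
		"os"
	)

	func main() {
		const endpoint = "https://control-plane.minikube.internal:8443"
		files := []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		for _, f := range files {
			data, err := os.ReadFile(f)
			if err != nil || !bytes.Contains(data, []byte(endpoint)) {
				// Missing or pointing elsewhere: remove so kubeadm regenerates it.
				os.Remove(f)
				fmt.Println("removed:", f)
			}
		}
	}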
	I0314 00:21:10.602962 1964718 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0314 00:21:10.648826 1964718 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0314 00:21:10.649039 1964718 kubeadm.go:309] [preflight] Running pre-flight checks
	I0314 00:21:10.689230 1964718 kubeadm.go:309] [preflight] The system verification failed. Printing the output from the verification:
	I0314 00:21:10.689305 1964718 kubeadm.go:309] KERNEL_VERSION: 5.15.0-1055-aws
	I0314 00:21:10.689343 1964718 kubeadm.go:309] OS: Linux
	I0314 00:21:10.689392 1964718 kubeadm.go:309] CGROUPS_CPU: enabled
	I0314 00:21:10.689460 1964718 kubeadm.go:309] CGROUPS_CPUACCT: enabled
	I0314 00:21:10.689508 1964718 kubeadm.go:309] CGROUPS_CPUSET: enabled
	I0314 00:21:10.689558 1964718 kubeadm.go:309] CGROUPS_DEVICES: enabled
	I0314 00:21:10.689608 1964718 kubeadm.go:309] CGROUPS_FREEZER: enabled
	I0314 00:21:10.689658 1964718 kubeadm.go:309] CGROUPS_MEMORY: enabled
	I0314 00:21:10.689706 1964718 kubeadm.go:309] CGROUPS_PIDS: enabled
	I0314 00:21:10.689759 1964718 kubeadm.go:309] CGROUPS_HUGETLB: enabled
	I0314 00:21:10.689807 1964718 kubeadm.go:309] CGROUPS_BLKIO: enabled
	I0314 00:21:10.765275 1964718 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0314 00:21:10.765434 1964718 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0314 00:21:10.765542 1964718 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0314 00:21:10.986714 1964718 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0314 00:21:10.990549 1964718 out.go:204]   - Generating certificates and keys ...
	I0314 00:21:10.990721 1964718 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0314 00:21:10.990835 1964718 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0314 00:21:11.603556 1964718 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0314 00:21:11.874177 1964718 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0314 00:21:12.373310 1964718 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0314 00:21:13.180291 1964718 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0314 00:21:13.462152 1964718 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0314 00:21:13.462449 1964718 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-122411 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0314 00:21:13.862723 1964718 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0314 00:21:13.863034 1964718 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-122411 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0314 00:21:14.748039 1964718 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0314 00:21:15.581971 1964718 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0314 00:21:15.850198 1964718 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0314 00:21:15.850481 1964718 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0314 00:21:16.039583 1964718 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0314 00:21:16.590513 1964718 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0314 00:21:16.912211 1964718 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0314 00:21:17.184871 1964718 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0314 00:21:17.185486 1964718 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0314 00:21:17.188252 1964718 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0314 00:21:17.191015 1964718 out.go:204]   - Booting up control plane ...
	I0314 00:21:17.191118 1964718 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0314 00:21:17.191193 1964718 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0314 00:21:17.193476 1964718 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0314 00:21:17.208928 1964718 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0314 00:21:17.209024 1964718 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0314 00:21:17.209063 1964718 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0314 00:21:17.315697 1964718 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0314 00:21:23.810193 1964718 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.502110 seconds
	I0314 00:21:23.810314 1964718 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0314 00:21:23.825572 1964718 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0314 00:21:24.351174 1964718 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0314 00:21:24.351388 1964718 kubeadm.go:309] [mark-control-plane] Marking the node addons-122411 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0314 00:21:24.863650 1964718 kubeadm.go:309] [bootstrap-token] Using token: d0efvz.p5fbzk0vpfz99a1c
	I0314 00:21:24.865747 1964718 out.go:204]   - Configuring RBAC rules ...
	I0314 00:21:24.865884 1964718 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0314 00:21:24.875024 1964718 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0314 00:21:24.883188 1964718 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0314 00:21:24.887327 1964718 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0314 00:21:24.891491 1964718 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0314 00:21:24.895778 1964718 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0314 00:21:24.909632 1964718 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0314 00:21:25.137952 1964718 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0314 00:21:25.283878 1964718 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0314 00:21:25.286110 1964718 kubeadm.go:309] 
	I0314 00:21:25.286188 1964718 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0314 00:21:25.286199 1964718 kubeadm.go:309] 
	I0314 00:21:25.286274 1964718 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0314 00:21:25.286283 1964718 kubeadm.go:309] 
	I0314 00:21:25.286307 1964718 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0314 00:21:25.286368 1964718 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0314 00:21:25.286420 1964718 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0314 00:21:25.286428 1964718 kubeadm.go:309] 
	I0314 00:21:25.286480 1964718 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0314 00:21:25.286488 1964718 kubeadm.go:309] 
	I0314 00:21:25.286551 1964718 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0314 00:21:25.286558 1964718 kubeadm.go:309] 
	I0314 00:21:25.286609 1964718 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0314 00:21:25.286685 1964718 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0314 00:21:25.286754 1964718 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0314 00:21:25.286763 1964718 kubeadm.go:309] 
	I0314 00:21:25.286844 1964718 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0314 00:21:25.286921 1964718 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0314 00:21:25.286929 1964718 kubeadm.go:309] 
	I0314 00:21:25.287009 1964718 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token d0efvz.p5fbzk0vpfz99a1c \
	I0314 00:21:25.287111 1964718 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:644f2669e4ceaaa79da07ee7b0c25bc89ffedad1f907d006eecbfc00d6f5ae7a \
	I0314 00:21:25.287135 1964718 kubeadm.go:309] 	--control-plane 
	I0314 00:21:25.287139 1964718 kubeadm.go:309] 
	I0314 00:21:25.287233 1964718 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0314 00:21:25.287238 1964718 kubeadm.go:309] 
	I0314 00:21:25.287316 1964718 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token d0efvz.p5fbzk0vpfz99a1c \
	I0314 00:21:25.287413 1964718 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:644f2669e4ceaaa79da07ee7b0c25bc89ffedad1f907d006eecbfc00d6f5ae7a 
	I0314 00:21:25.289868 1964718 kubeadm.go:309] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1055-aws\n", err: exit status 1
	I0314 00:21:25.289990 1964718 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0314 00:21:25.290067 1964718 cni.go:84] Creating CNI manager for ""
	I0314 00:21:25.290101 1964718 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0314 00:21:25.292854 1964718 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0314 00:21:25.295125 1964718 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0314 00:21:25.299678 1964718 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0314 00:21:25.299697 1964718 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0314 00:21:25.318981 1964718 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0314 00:21:26.340677 1964718 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.021658137s)
	I0314 00:21:26.340716 1964718 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0314 00:21:26.340833 1964718 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 00:21:26.340925 1964718 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-122411 minikube.k8s.io/updated_at=2024_03_14T00_21_26_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=eceebabcbdee8f7e371d6df61e2829908b6c6abe minikube.k8s.io/name=addons-122411 minikube.k8s.io/primary=true
	I0314 00:21:26.375914 1964718 ops.go:34] apiserver oom_adj: -16
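The oom_adj check above reads /proc/<pid>/oom_adj for the running kube-apiserver (-16 means the kernel OOM killer strongly avoids it). A sketch that finds the PID by scanning /proc/<pid>/comm as a pgrep stand-in and prints the value:

	// oomadj.go: sketch of the apiserver oom_adj check shown above.
	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	func main() {
		matches, _ := filepath.Glob("/proc/[0-9]*/comm")
		for _, comm := range matches {
			name, err := os.ReadFile(comm)
			if err != nil || strings.TrimSpace(string(name)) != "kube-apiserver" {
				continue
			}
			adj, err := os.ReadFile(filepath.Join(filepath.Dir(comm), "oom_adj"))
			if err == nil {
				fmt.Printf("apiserver oom_adj: %s", adj)
			}
			return
		}
		fmt.Fprintln(os.Stderr, "kube-apiserver not found")
	}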
	I0314 00:21:26.518772 1964718 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 00:21:27.019559 1964718 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 00:21:27.519438 1964718 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 00:21:28.019667 1964718 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 00:21:28.518931 1964718 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 00:21:29.018949 1964718 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 00:21:29.519743 1964718 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 00:21:30.020352 1964718 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 00:21:30.519777 1964718 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 00:21:31.019257 1964718 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 00:21:31.519468 1964718 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 00:21:32.018909 1964718 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 00:21:32.519638 1964718 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 00:21:33.019538 1964718 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 00:21:33.518963 1964718 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 00:21:34.019251 1964718 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 00:21:34.519188 1964718 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 00:21:35.019382 1964718 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 00:21:35.519700 1964718 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 00:21:36.018929 1964718 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 00:21:36.519071 1964718 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 00:21:37.018935 1964718 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 00:21:37.519662 1964718 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 00:21:37.607266 1964718 kubeadm.go:1106] duration metric: took 11.266477993s to wait for elevateKubeSystemPrivileges
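The repeated `kubectl get sa default` runs above are a poll: minikube retries every 500ms until the default service account exists, which is what elevateKubeSystemPrivileges waits on (11.27s here). A sketch of that polling loop, with the kubectl path and kubeconfig taken from the log and the 2-minute deadline an assumption:

	// sawait.go: sketch of polling for the default service account as above.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"time"
	)

	func main() {
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.28.4/kubectl",
				"get", "sa", "default", "--kubeconfig=/var/lib/minikube/kubeconfig")
			if err := cmd.Run(); err == nil {
				fmt.Println("default service account is ready")
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Fprintln(os.Stderr, "timed out waiting for default service account")
		os.Exit(1)
	}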
	W0314 00:21:37.607301 1964718 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0314 00:21:37.607309 1964718 kubeadm.go:393] duration metric: took 27.155341393s to StartCluster
	I0314 00:21:37.607324 1964718 settings.go:142] acquiring lock: {Name:mkb041dc79ae1947b27d39dd7ebbd3bd473ee07d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 00:21:37.607437 1964718 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18375-1958430/kubeconfig
	I0314 00:21:37.607829 1964718 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-1958430/kubeconfig: {Name:mkdddca847fdd161b32ac7434f6b37d491dbdecd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 00:21:37.608508 1964718 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0314 00:21:37.610920 1964718 out.go:177] * Verifying Kubernetes components...
	I0314 00:21:37.608643 1964718 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0314 00:21:37.608824 1964718 config.go:182] Loaded profile config "addons-122411": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0314 00:21:37.608833 1964718 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0314 00:21:37.612996 1964718 addons.go:69] Setting yakd=true in profile "addons-122411"
	I0314 00:21:37.613023 1964718 addons.go:234] Setting addon yakd=true in "addons-122411"
	I0314 00:21:37.613061 1964718 host.go:66] Checking if "addons-122411" exists ...
	I0314 00:21:37.613587 1964718 cli_runner.go:164] Run: docker container inspect addons-122411 --format={{.State.Status}}
	I0314 00:21:37.613656 1964718 addons.go:69] Setting ingress=true in profile "addons-122411"
	I0314 00:21:37.613687 1964718 addons.go:234] Setting addon ingress=true in "addons-122411"
	I0314 00:21:37.613721 1964718 host.go:66] Checking if "addons-122411" exists ...
	I0314 00:21:37.614104 1964718 cli_runner.go:164] Run: docker container inspect addons-122411 --format={{.State.Status}}
	I0314 00:21:37.614592 1964718 addons.go:69] Setting ingress-dns=true in profile "addons-122411"
	I0314 00:21:37.614633 1964718 addons.go:234] Setting addon ingress-dns=true in "addons-122411"
	I0314 00:21:37.614668 1964718 host.go:66] Checking if "addons-122411" exists ...
	I0314 00:21:37.614724 1964718 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 00:21:37.614941 1964718 addons.go:69] Setting cloud-spanner=true in profile "addons-122411"
	I0314 00:21:37.615010 1964718 addons.go:234] Setting addon cloud-spanner=true in "addons-122411"
	I0314 00:21:37.615052 1964718 host.go:66] Checking if "addons-122411" exists ...
	I0314 00:21:37.615173 1964718 cli_runner.go:164] Run: docker container inspect addons-122411 --format={{.State.Status}}
	I0314 00:21:37.615607 1964718 cli_runner.go:164] Run: docker container inspect addons-122411 --format={{.State.Status}}
	I0314 00:21:37.618290 1964718 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-122411"
	I0314 00:21:37.618370 1964718 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-122411"
	I0314 00:21:37.618403 1964718 host.go:66] Checking if "addons-122411" exists ...
	I0314 00:21:37.618819 1964718 cli_runner.go:164] Run: docker container inspect addons-122411 --format={{.State.Status}}
	I0314 00:21:37.622967 1964718 addons.go:69] Setting default-storageclass=true in profile "addons-122411"
	I0314 00:21:37.623019 1964718 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-122411"
	I0314 00:21:37.623372 1964718 cli_runner.go:164] Run: docker container inspect addons-122411 --format={{.State.Status}}
	I0314 00:21:37.623597 1964718 addons.go:69] Setting inspektor-gadget=true in profile "addons-122411"
	I0314 00:21:37.623678 1964718 addons.go:234] Setting addon inspektor-gadget=true in "addons-122411"
	I0314 00:21:37.623749 1964718 host.go:66] Checking if "addons-122411" exists ...
	I0314 00:21:37.627451 1964718 cli_runner.go:164] Run: docker container inspect addons-122411 --format={{.State.Status}}
	I0314 00:21:37.659735 1964718 addons.go:69] Setting gcp-auth=true in profile "addons-122411"
	I0314 00:21:37.659858 1964718 mustload.go:65] Loading cluster: addons-122411
	I0314 00:21:37.660079 1964718 config.go:182] Loaded profile config "addons-122411": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0314 00:21:37.663691 1964718 addons.go:69] Setting metrics-server=true in profile "addons-122411"
	I0314 00:21:37.663866 1964718 addons.go:234] Setting addon metrics-server=true in "addons-122411"
	I0314 00:21:37.663932 1964718 host.go:66] Checking if "addons-122411" exists ...
	I0314 00:21:37.665172 1964718 cli_runner.go:164] Run: docker container inspect addons-122411 --format={{.State.Status}}
	I0314 00:21:37.672744 1964718 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.0
	I0314 00:21:37.674995 1964718 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0314 00:21:37.677125 1964718 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0314 00:21:37.682188 1964718 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0314 00:21:37.682213 1964718 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0314 00:21:37.682285 1964718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-122411
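The Go template handed to docker inspect here extracts the host port bound to the container's 22/tcp, which is the port the sshutil clients below dial (Port:35041). A standalone check, assuming the container is running:

	# Print the host port Docker mapped to the container's SSH port.
	docker container inspect \
	  -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-122411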
	I0314 00:21:37.663759 1964718 cli_runner.go:164] Run: docker container inspect addons-122411 --format={{.State.Status}}
	I0314 00:21:37.702049 1964718 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.14
	I0314 00:21:37.704332 1964718 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0314 00:21:37.704356 1964718 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0314 00:21:37.704424 1964718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-122411
	I0314 00:21:37.714220 1964718 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-122411"
	I0314 00:21:37.714275 1964718 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-122411"
	I0314 00:21:37.714320 1964718 host.go:66] Checking if "addons-122411" exists ...
	I0314 00:21:37.714793 1964718 cli_runner.go:164] Run: docker container inspect addons-122411 --format={{.State.Status}}
	I0314 00:21:37.727434 1964718 addons.go:69] Setting registry=true in profile "addons-122411"
	I0314 00:21:37.727478 1964718 addons.go:234] Setting addon registry=true in "addons-122411"
	I0314 00:21:37.727523 1964718 host.go:66] Checking if "addons-122411" exists ...
	I0314 00:21:37.727962 1964718 cli_runner.go:164] Run: docker container inspect addons-122411 --format={{.State.Status}}
	I0314 00:21:37.738301 1964718 addons.go:69] Setting storage-provisioner=true in profile "addons-122411"
	I0314 00:21:37.738343 1964718 addons.go:234] Setting addon storage-provisioner=true in "addons-122411"
	I0314 00:21:37.738381 1964718 host.go:66] Checking if "addons-122411" exists ...
	I0314 00:21:37.742378 1964718 cli_runner.go:164] Run: docker container inspect addons-122411 --format={{.State.Status}}
	I0314 00:21:37.760950 1964718 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-122411"
	I0314 00:21:37.761056 1964718 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-122411"
	I0314 00:21:37.761424 1964718 cli_runner.go:164] Run: docker container inspect addons-122411 --format={{.State.Status}}
	I0314 00:21:37.774259 1964718 addons.go:69] Setting volumesnapshots=true in profile "addons-122411"
	I0314 00:21:37.774361 1964718 addons.go:234] Setting addon volumesnapshots=true in "addons-122411"
	I0314 00:21:37.774433 1964718 host.go:66] Checking if "addons-122411" exists ...
	I0314 00:21:37.796925 1964718 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0314 00:21:37.800444 1964718 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0314 00:21:37.800512 1964718 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0314 00:21:37.800614 1964718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-122411
	I0314 00:21:37.816799 1964718 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0314 00:21:37.814370 1964718 addons.go:234] Setting addon default-storageclass=true in "addons-122411"
	I0314 00:21:37.775010 1964718 cli_runner.go:164] Run: docker container inspect addons-122411 --format={{.State.Status}}
	I0314 00:21:37.820744 1964718 host.go:66] Checking if "addons-122411" exists ...
	I0314 00:21:37.826235 1964718 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0314 00:21:37.841845 1964718 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0314 00:21:37.841921 1964718 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0314 00:21:37.842001 1964718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-122411
	I0314 00:21:37.854293 1964718 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.0
	I0314 00:21:37.863311 1964718 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0314 00:21:37.863345 1964718 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0314 00:21:37.863426 1964718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-122411
	I0314 00:21:37.860218 1964718 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.26.0
	I0314 00:21:37.900830 1964718 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0314 00:21:37.900901 1964718 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0314 00:21:37.901007 1964718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-122411
	I0314 00:21:37.908078 1964718 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35041 SSHKeyPath:/home/jenkins/minikube-integration/18375-1958430/.minikube/machines/addons-122411/id_rsa Username:docker}
	I0314 00:21:37.860230 1964718 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0314 00:21:37.860736 1964718 cli_runner.go:164] Run: docker container inspect addons-122411 --format={{.State.Status}}
	I0314 00:21:37.993637 1964718 host.go:66] Checking if "addons-122411" exists ...
	I0314 00:21:37.995714 1964718 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.5
	I0314 00:21:37.998088 1964718 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0314 00:21:37.998153 1964718 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0314 00:21:37.998272 1964718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-122411
	I0314 00:21:38.058336 1964718 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0314 00:21:38.055461 1964718 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35041 SSHKeyPath:/home/jenkins/minikube-integration/18375-1958430/.minikube/machines/addons-122411/id_rsa Username:docker}
	I0314 00:21:38.056323 1964718 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35041 SSHKeyPath:/home/jenkins/minikube-integration/18375-1958430/.minikube/machines/addons-122411/id_rsa Username:docker}
	I0314 00:21:38.060034 1964718 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35041 SSHKeyPath:/home/jenkins/minikube-integration/18375-1958430/.minikube/machines/addons-122411/id_rsa Username:docker}
	I0314 00:21:38.073817 1964718 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0314 00:21:38.077413 1964718 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0314 00:21:38.077002 1964718 out.go:177]   - Using image docker.io/registry:2.8.3
	I0314 00:21:38.077317 1964718 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0314 00:21:38.080983 1964718 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 00:21:38.082848 1964718 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0314 00:21:38.082869 1964718 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0314 00:21:38.082940 1964718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-122411
	I0314 00:21:38.090116 1964718 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0314 00:21:38.092020 1964718 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0314 00:21:38.092043 1964718 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0314 00:21:38.092108 1964718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-122411
	I0314 00:21:38.097295 1964718 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0314 00:21:38.097201 1964718 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-122411"
	I0314 00:21:38.116011 1964718 host.go:66] Checking if "addons-122411" exists ...
	I0314 00:21:38.116559 1964718 cli_runner.go:164] Run: docker container inspect addons-122411 --format={{.State.Status}}
	I0314 00:21:38.135086 1964718 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0314 00:21:38.134682 1964718 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0314 00:21:38.147733 1964718 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0314 00:21:38.147691 1964718 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0314 00:21:38.150086 1964718 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0314 00:21:38.150101 1964718 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0314 00:21:38.150155 1964718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-122411
	I0314 00:21:38.168042 1964718 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35041 SSHKeyPath:/home/jenkins/minikube-integration/18375-1958430/.minikube/machines/addons-122411/id_rsa Username:docker}
	I0314 00:21:38.148261 1964718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-122411
	I0314 00:21:38.211545 1964718 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35041 SSHKeyPath:/home/jenkins/minikube-integration/18375-1958430/.minikube/machines/addons-122411/id_rsa Username:docker}
	I0314 00:21:38.212808 1964718 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35041 SSHKeyPath:/home/jenkins/minikube-integration/18375-1958430/.minikube/machines/addons-122411/id_rsa Username:docker}
	I0314 00:21:38.222965 1964718 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0314 00:21:38.231470 1964718 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0314 00:21:38.231559 1964718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-122411
	I0314 00:21:38.233997 1964718 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35041 SSHKeyPath:/home/jenkins/minikube-integration/18375-1958430/.minikube/machines/addons-122411/id_rsa Username:docker}
	I0314 00:21:38.279535 1964718 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35041 SSHKeyPath:/home/jenkins/minikube-integration/18375-1958430/.minikube/machines/addons-122411/id_rsa Username:docker}
	I0314 00:21:38.309627 1964718 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0314 00:21:38.306730 1964718 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35041 SSHKeyPath:/home/jenkins/minikube-integration/18375-1958430/.minikube/machines/addons-122411/id_rsa Username:docker}
	I0314 00:21:38.315061 1964718 out.go:177]   - Using image docker.io/busybox:stable
	I0314 00:21:38.316876 1964718 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0314 00:21:38.316895 1964718 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0314 00:21:38.316961 1964718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-122411
	I0314 00:21:38.325150 1964718 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35041 SSHKeyPath:/home/jenkins/minikube-integration/18375-1958430/.minikube/machines/addons-122411/id_rsa Username:docker}
	I0314 00:21:38.335129 1964718 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35041 SSHKeyPath:/home/jenkins/minikube-integration/18375-1958430/.minikube/machines/addons-122411/id_rsa Username:docker}
	I0314 00:21:38.365040 1964718 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35041 SSHKeyPath:/home/jenkins/minikube-integration/18375-1958430/.minikube/machines/addons-122411/id_rsa Username:docker}
	I0314 00:21:38.511856 1964718 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
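The pipeline above edits the coredns ConfigMap in place: sed inserts a hosts block ahead of the forward directive (and a log directive ahead of errors), then feeds the result back through kubectl replace. Unescaping the sed expression, the injected Corefile stanza is:

	        hosts {
	           192.168.49.1 host.minikube.internal
	           fallthrough
	        }

This is what later lets pods resolve host.minikube.internal to 192.168.49.1.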
	I0314 00:21:38.512035 1964718 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 00:21:38.611870 1964718 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0314 00:21:38.669031 1964718 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0314 00:21:38.669062 1964718 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0314 00:21:38.680353 1964718 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0314 00:21:38.680433 1964718 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0314 00:21:38.753205 1964718 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0314 00:21:38.761307 1964718 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0314 00:21:38.761338 1964718 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0314 00:21:38.771369 1964718 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0314 00:21:38.849172 1964718 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0314 00:21:38.852120 1964718 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0314 00:21:38.852156 1964718 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0314 00:21:38.879955 1964718 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0314 00:21:38.879980 1964718 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0314 00:21:38.888918 1964718 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0314 00:21:38.888947 1964718 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0314 00:21:38.890143 1964718 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0314 00:21:38.890162 1964718 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0314 00:21:38.914447 1964718 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0314 00:21:38.924225 1964718 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0314 00:21:38.968177 1964718 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0314 00:21:38.968241 1964718 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0314 00:21:38.971429 1964718 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0314 00:21:38.971491 1964718 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0314 00:21:39.017281 1964718 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0314 00:21:39.025659 1964718 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0314 00:21:39.025737 1964718 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0314 00:21:39.088747 1964718 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0314 00:21:39.088823 1964718 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0314 00:21:39.094028 1964718 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0314 00:21:39.094104 1964718 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0314 00:21:39.134295 1964718 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0314 00:21:39.134370 1964718 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0314 00:21:39.196850 1964718 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0314 00:21:39.196925 1964718 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0314 00:21:39.213156 1964718 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0314 00:21:39.213432 1964718 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0314 00:21:39.239100 1964718 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0314 00:21:39.239164 1964718 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0314 00:21:39.284104 1964718 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0314 00:21:39.284166 1964718 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0314 00:21:39.293521 1964718 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0314 00:21:39.301463 1964718 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0314 00:21:39.374138 1964718 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0314 00:21:39.374218 1964718 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0314 00:21:39.382536 1964718 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0314 00:21:39.382605 1964718 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0314 00:21:39.452308 1964718 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0314 00:21:39.452375 1964718 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0314 00:21:39.683553 1964718 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0314 00:21:39.874258 1964718 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0314 00:21:39.874329 1964718 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0314 00:21:39.878128 1964718 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0314 00:21:39.878191 1964718 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0314 00:21:39.881611 1964718 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0314 00:21:39.881676 1964718 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0314 00:21:40.182375 1964718 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0314 00:21:40.234620 1964718 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0314 00:21:40.234697 1964718 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0314 00:21:40.244740 1964718 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0314 00:21:40.244816 1964718 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0314 00:21:40.453864 1964718 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0314 00:21:40.453933 1964718 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0314 00:21:40.486350 1964718 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0314 00:21:40.486426 1964718 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0314 00:21:40.546975 1964718 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0314 00:21:40.702584 1964718 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0314 00:21:40.702648 1964718 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0314 00:21:41.114155 1964718 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0314 00:21:41.114229 1964718 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0314 00:21:41.403721 1964718 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0314 00:21:41.403782 1964718 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0314 00:21:41.610565 1964718 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0314 00:21:41.610636 1964718 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0314 00:21:41.620818 1964718 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.108739391s)
	I0314 00:21:41.621755 1964718 node_ready.go:35] waiting up to 6m0s for node "addons-122411" to be "Ready" ...
	I0314 00:21:41.621841 1964718 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.109907267s)
	I0314 00:21:41.621958 1964718 start.go:948] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0314 00:21:41.631490 1964718 node_ready.go:49] node "addons-122411" has status "Ready":"True"
	I0314 00:21:41.631565 1964718 node_ready.go:38] duration metric: took 9.674573ms for node "addons-122411" to be "Ready" ...
	I0314 00:21:41.631589 1964718 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 00:21:41.641725 1964718 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-9528m" in "kube-system" namespace to be "Ready" ...
	I0314 00:21:42.007994 1964718 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0314 00:21:42.126144 1964718 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-122411" context rescaled to 1 replicas
	I0314 00:21:42.644415 1964718 pod_ready.go:97] error getting pod "coredns-5dd5756b68-9528m" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-9528m" not found
	I0314 00:21:42.644444 1964718 pod_ready.go:81] duration metric: took 1.002642471s for pod "coredns-5dd5756b68-9528m" in "kube-system" namespace to be "Ready" ...
	E0314 00:21:42.644456 1964718 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-9528m" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-9528m" not found
	I0314 00:21:42.644464 1964718 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-zfmbr" in "kube-system" namespace to be "Ready" ...
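The "not found" above is expected rather than fatal: the coredns deployment was rescaled from two replicas to one at 00:21:42.126 (the kapi.go line above), so the first pod the waiter picked was deleted mid-wait, and the loop moves on to the surviving replica. The rescale is equivalent to (a sketch, not the code path minikube uses):

	# Scale coredns down to a single replica, as the rescale above did.
	kubectl --context addons-122411 -n kube-system scale deployment coredns --replicas=1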
	I0314 00:21:44.713259 1964718 pod_ready.go:102] pod "coredns-5dd5756b68-zfmbr" in "kube-system" namespace has status "Ready":"False"
	I0314 00:21:44.868468 1964718 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0314 00:21:44.868574 1964718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-122411
	I0314 00:21:44.910978 1964718 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35041 SSHKeyPath:/home/jenkins/minikube-integration/18375-1958430/.minikube/machines/addons-122411/id_rsa Username:docker}
	I0314 00:21:45.530151 1964718 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0314 00:21:45.682730 1964718 addons.go:234] Setting addon gcp-auth=true in "addons-122411"
	I0314 00:21:45.682832 1964718 host.go:66] Checking if "addons-122411" exists ...
	I0314 00:21:45.683368 1964718 cli_runner.go:164] Run: docker container inspect addons-122411 --format={{.State.Status}}
	I0314 00:21:45.707122 1964718 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0314 00:21:45.707181 1964718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-122411
	I0314 00:21:45.732796 1964718 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35041 SSHKeyPath:/home/jenkins/minikube-integration/18375-1958430/.minikube/machines/addons-122411/id_rsa Username:docker}
	I0314 00:21:46.585866 1964718 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.9739045s)
	I0314 00:21:46.585958 1964718 addons.go:470] Verifying addon ingress=true in "addons-122411"
	I0314 00:21:46.588791 1964718 out.go:177] * Verifying ingress addon...
	I0314 00:21:46.586234 1964718 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.832939028s)
	I0314 00:21:46.586265 1964718 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.814866639s)
	I0314 00:21:46.586286 1964718 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (7.737086902s)
	I0314 00:21:46.586340 1964718 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.671815349s)
	I0314 00:21:46.586366 1964718 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.662082881s)
	I0314 00:21:46.586455 1964718 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.292858486s)
	I0314 00:21:46.586512 1964718 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.284974602s)
	I0314 00:21:46.586541 1964718 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.902915724s)
	I0314 00:21:46.586619 1964718 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.40416971s)
	I0314 00:21:46.586660 1964718 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.039401646s)
	I0314 00:21:46.586677 1964718 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.569047562s)
	I0314 00:21:46.591525 1964718 addons.go:470] Verifying addon registry=true in "addons-122411"
	I0314 00:21:46.593920 1964718 out.go:177] * Verifying registry addon...
	I0314 00:21:46.591974 1964718 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0314 00:21:46.592023 1964718 addons.go:470] Verifying addon metrics-server=true in "addons-122411"
	W0314 00:21:46.592044 1964718 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0314 00:21:46.597472 1964718 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0314 00:21:46.598732 1964718 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-122411 service yakd-dashboard -n yakd-dashboard
	
	I0314 00:21:46.598843 1964718 retry.go:31] will retry after 208.901216ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0314 00:21:46.608579 1964718 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0314 00:21:46.608603 1964718 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 00:21:46.609195 1964718 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0314 00:21:46.609206 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0314 00:21:46.615484 1964718 out.go:239] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
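The failure reported here is the API server's optimistic-concurrency check: another writer updated the local-path StorageClass between this callback's read and write, so the stale update is rejected and has to be re-read and retried. Done by hand, the default-class marking the callback was attempting is a single idempotent patch (an assumption about intent; the callback itself may work differently):

	# Mark local-path as the default StorageClass.
	kubectl --context addons-122411 patch storageclass local-path -p \
	  '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'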
	I0314 00:21:46.811325 1964718 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
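The retry re-runs the same manifest bundle with --force; by this point the three snapshot CRDs created on the first pass have registered, so the VolumeSnapshotClass no longer fails with "no matches for kind". The race could also be avoided by ordering the applies explicitly; a minimal sketch with the same manifests:

	# Install the CRDs first and wait for them to be served...
	kubectl apply \
	  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
	  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
	  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	kubectl wait --for condition=established --timeout=60s \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io
	# ...then apply the custom resources and the controller that depend on them.
	kubectl apply \
	  -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml \
	  -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml \
	  -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml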
	I0314 00:21:47.115176 1964718 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 00:21:47.116119 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 00:21:47.162724 1964718 pod_ready.go:102] pod "coredns-5dd5756b68-zfmbr" in "kube-system" namespace has status "Ready":"False"
	I0314 00:21:47.616979 1964718 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 00:21:47.617826 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 00:21:47.977792 1964718 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.27063935s)
	I0314 00:21:47.980913 1964718 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0314 00:21:47.978048 1964718 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.969953436s)
	I0314 00:21:47.981069 1964718 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-122411"
	I0314 00:21:47.983319 1964718 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.1
	I0314 00:21:47.986520 1964718 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0314 00:21:47.986544 1964718 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0314 00:21:47.986449 1964718 out.go:177] * Verifying csi-hostpath-driver addon...
	I0314 00:21:47.990265 1964718 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0314 00:21:48.010920 1964718 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0314 00:21:48.010944 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
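Each kapi.go:96 line from here on is one tick of a poll loop over a label selector, repeated until the matched pods report Ready. The equivalent view from the CLI (a sketch):

	# Watch the csi-hostpath-driver pods the poll loop above is tracking.
	kubectl --context addons-122411 -n kube-system get pods \
	  -l kubernetes.io/minikube-addons=csi-hostpath-driver --watch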
	I0314 00:21:48.037646 1964718 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0314 00:21:48.037722 1964718 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0314 00:21:48.097928 1964718 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0314 00:21:48.098001 1964718 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0314 00:21:48.108412 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 00:21:48.111510 1964718 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 00:21:48.179283 1964718 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0314 00:21:48.498259 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 00:21:48.606767 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 00:21:48.608047 1964718 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 00:21:48.923245 1964718 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.111847183s)
	I0314 00:21:48.997776 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 00:21:49.106542 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 00:21:49.107746 1964718 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 00:21:49.420412 1964718 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.241042139s)
	I0314 00:21:49.423249 1964718 addons.go:470] Verifying addon gcp-auth=true in "addons-122411"
	I0314 00:21:49.425715 1964718 out.go:177] * Verifying gcp-auth addon...
	I0314 00:21:49.428342 1964718 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0314 00:21:49.445995 1964718 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0314 00:21:49.446016 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 00:21:49.496916 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 00:21:49.603518 1964718 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 00:21:49.604923 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 00:21:49.651939 1964718 pod_ready.go:102] pod "coredns-5dd5756b68-zfmbr" in "kube-system" namespace has status "Ready":"False"
	I0314 00:21:49.932559 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 00:21:49.996611 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 00:21:50.106452 1964718 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 00:21:50.107129 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 00:21:50.432831 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 00:21:50.498135 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 00:21:50.605119 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 00:21:50.605984 1964718 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 00:21:50.932415 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 00:21:51.001255 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 00:21:51.102990 1964718 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 00:21:51.104424 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 00:21:51.432865 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 00:21:51.497289 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 00:21:51.609339 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 00:21:51.611920 1964718 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 00:21:51.652822 1964718 pod_ready.go:102] pod "coredns-5dd5756b68-zfmbr" in "kube-system" namespace has status "Ready":"False"
	I0314 00:21:51.933410 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 00:21:52.008106 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 00:21:52.108656 1964718 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 00:21:52.114019 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 00:21:52.435935 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 00:21:52.498794 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 00:21:52.606228 1964718 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 00:21:52.606641 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 00:21:52.932872 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 00:21:53.002595 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 00:21:53.112019 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 00:21:53.112224 1964718 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 00:21:53.433080 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 00:21:53.498731 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 00:21:53.607359 1964718 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 00:21:53.624991 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 00:21:53.652595 1964718 pod_ready.go:92] pod "coredns-5dd5756b68-zfmbr" in "kube-system" namespace has status "Ready":"True"
	I0314 00:21:53.652636 1964718 pod_ready.go:81] duration metric: took 11.00816414s for pod "coredns-5dd5756b68-zfmbr" in "kube-system" namespace to be "Ready" ...
	I0314 00:21:53.652649 1964718 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-122411" in "kube-system" namespace to be "Ready" ...
	I0314 00:21:53.662987 1964718 pod_ready.go:92] pod "etcd-addons-122411" in "kube-system" namespace has status "Ready":"True"
	I0314 00:21:53.663017 1964718 pod_ready.go:81] duration metric: took 10.356275ms for pod "etcd-addons-122411" in "kube-system" namespace to be "Ready" ...
	I0314 00:21:53.663034 1964718 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-122411" in "kube-system" namespace to be "Ready" ...
	I0314 00:21:53.669353 1964718 pod_ready.go:92] pod "kube-apiserver-addons-122411" in "kube-system" namespace has status "Ready":"True"
	I0314 00:21:53.669392 1964718 pod_ready.go:81] duration metric: took 6.341374ms for pod "kube-apiserver-addons-122411" in "kube-system" namespace to be "Ready" ...
	I0314 00:21:53.669405 1964718 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-122411" in "kube-system" namespace to be "Ready" ...
	I0314 00:21:53.684480 1964718 pod_ready.go:92] pod "kube-controller-manager-addons-122411" in "kube-system" namespace has status "Ready":"True"
	I0314 00:21:53.684508 1964718 pod_ready.go:81] duration metric: took 15.096028ms for pod "kube-controller-manager-addons-122411" in "kube-system" namespace to be "Ready" ...
	I0314 00:21:53.684521 1964718 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-l8qg6" in "kube-system" namespace to be "Ready" ...
	I0314 00:21:53.694920 1964718 pod_ready.go:92] pod "kube-proxy-l8qg6" in "kube-system" namespace has status "Ready":"True"
	I0314 00:21:53.694944 1964718 pod_ready.go:81] duration metric: took 10.415623ms for pod "kube-proxy-l8qg6" in "kube-system" namespace to be "Ready" ...
	I0314 00:21:53.694955 1964718 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-122411" in "kube-system" namespace to be "Ready" ...
	I0314 00:21:53.933265 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 00:21:53.997523 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 00:21:54.050762 1964718 pod_ready.go:92] pod "kube-scheduler-addons-122411" in "kube-system" namespace has status "Ready":"True"
	I0314 00:21:54.050796 1964718 pod_ready.go:81] duration metric: took 355.831579ms for pod "kube-scheduler-addons-122411" in "kube-system" namespace to be "Ready" ...
	I0314 00:21:54.050811 1964718 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-98jl7" in "kube-system" namespace to be "Ready" ...
	I0314 00:21:54.105597 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 00:21:54.106806 1964718 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 00:21:54.433284 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 00:21:54.496487 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 00:21:54.603344 1964718 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 00:21:54.605818 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 00:21:54.932745 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 00:21:54.995882 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 00:21:55.105702 1964718 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 00:21:55.108204 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 00:21:55.432739 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 00:21:55.497271 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 00:21:55.605761 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 00:21:55.606917 1964718 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 00:21:55.935847 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 00:21:55.997244 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 00:21:56.058395 1964718 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-98jl7" in "kube-system" namespace has status "Ready":"False"
	I0314 00:21:56.109254 1964718 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 00:21:56.110119 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 00:21:56.431876 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 00:21:56.496837 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 00:21:56.604491 1964718 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 00:21:56.605399 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 00:21:56.934265 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 00:21:56.997460 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 00:21:57.106219 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 00:21:57.106491 1964718 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 00:21:57.434978 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 00:21:57.497700 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 00:21:57.605387 1964718 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 00:21:57.607996 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 00:21:57.933149 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 00:21:57.996979 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 00:21:58.102468 1964718 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 00:21:58.105576 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 00:21:58.432344 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 00:21:58.495950 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 00:21:58.557136 1964718 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-98jl7" in "kube-system" namespace has status "Ready":"False"
	I0314 00:21:58.604204 1964718 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 00:21:58.614231 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 00:21:58.933302 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 00:21:58.996098 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 00:21:59.105441 1964718 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 00:21:59.105904 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 00:21:59.434671 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 00:21:59.496434 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 00:21:59.602206 1964718 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 00:21:59.604190 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 00:21:59.932462 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 00:21:59.996776 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 00:22:00.110706 1964718 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 00:22:00.125318 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 00:22:00.433023 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 00:22:00.498549 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 00:22:00.559100 1964718 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-98jl7" in "kube-system" namespace has status "Ready":"False"
	I0314 00:22:00.602179 1964718 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 00:22:00.604987 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 00:22:00.935101 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 00:22:00.997290 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 00:22:01.104364 1964718 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 00:22:01.104958 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 00:22:01.431968 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 00:22:01.496237 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 00:22:01.604793 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 00:22:01.607148 1964718 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 00:22:01.931932 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 00:22:01.997823 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 00:22:02.104383 1964718 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 00:22:02.107891 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 00:22:02.432864 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 00:22:02.496242 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 00:22:02.604417 1964718 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 00:22:02.605073 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 00:22:02.934506 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 00:22:02.997093 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 00:22:03.062496 1964718 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-98jl7" in "kube-system" namespace has status "Ready":"False"
	I0314 00:22:03.104641 1964718 kapi.go:107] duration metric: took 16.507168622s to wait for kubernetes.io/minikube-addons=registry ...
	I0314 00:22:03.104910 1964718 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 00:22:03.440636 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 00:22:03.497192 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 00:22:03.605555 1964718 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 00:22:03.935891 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 00:22:03.996579 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 00:22:04.103625 1964718 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 00:22:04.432559 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 00:22:04.497354 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 00:22:04.602873 1964718 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 00:22:04.933101 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 00:22:05.006519 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 00:22:05.102532 1964718 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 00:22:05.432617 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 00:22:05.502643 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 00:22:05.559183 1964718 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-98jl7" in "kube-system" namespace has status "Ready":"False"
	I0314 00:22:05.603390 1964718 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 00:22:05.932459 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 00:22:05.996606 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 00:22:06.103828 1964718 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 00:22:06.436449 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 00:22:06.496714 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 00:22:06.602601 1964718 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 00:22:06.933302 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 00:22:06.996573 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 00:22:07.102126 1964718 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 00:22:07.431949 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 00:22:07.498815 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 00:22:07.604409 1964718 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 00:22:07.934902 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 00:22:08.003123 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 00:22:08.072681 1964718 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-98jl7" in "kube-system" namespace has status "Ready":"False"
	I0314 00:22:08.102109 1964718 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 00:22:08.434118 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 00:22:08.501174 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 00:22:08.602770 1964718 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 00:22:08.932951 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 00:22:08.996580 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 00:22:09.102821 1964718 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 00:22:09.434768 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 00:22:09.497037 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 00:22:09.603117 1964718 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 00:22:09.932256 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 00:22:09.997293 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 00:22:10.104520 1964718 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 00:22:10.432281 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 00:22:10.495812 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 00:22:10.557367 1964718 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-98jl7" in "kube-system" namespace has status "Ready":"False"
	I0314 00:22:10.602198 1964718 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 00:22:10.932148 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 00:22:10.996710 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 00:22:11.103588 1964718 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 00:22:11.432706 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 00:22:11.499648 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 00:22:11.606653 1964718 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 00:22:11.932862 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 00:22:12.010556 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 00:22:12.120071 1964718 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 00:22:12.431761 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 00:22:12.501820 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 00:22:12.557605 1964718 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-98jl7" in "kube-system" namespace has status "Ready":"False"
	I0314 00:22:12.605844 1964718 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 00:22:12.933438 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 00:22:12.997968 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 00:22:13.102461 1964718 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 00:22:13.432939 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 00:22:13.495873 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 00:22:13.602563 1964718 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 00:22:13.932937 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 00:22:13.997076 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 00:22:14.102334 1964718 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 00:22:14.432362 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 00:22:14.498079 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 00:22:14.602235 1964718 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 00:22:14.932287 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 00:22:14.995657 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 00:22:15.059592 1964718 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-98jl7" in "kube-system" namespace has status "Ready":"False"
	I0314 00:22:15.102901 1964718 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 00:22:15.432968 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 00:22:15.496922 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 00:22:15.608173 1964718 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 00:22:15.935000 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 00:22:15.997865 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 00:22:16.102842 1964718 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 00:22:16.432566 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 00:22:16.495820 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 00:22:16.603750 1964718 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 00:22:16.932715 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 00:22:16.996140 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 00:22:17.059942 1964718 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-98jl7" in "kube-system" namespace has status "Ready":"False"
	I0314 00:22:17.102858 1964718 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 00:22:17.432937 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 00:22:17.495794 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 00:22:17.602301 1964718 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 00:22:17.932439 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 00:22:17.996552 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 00:22:18.102675 1964718 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 00:22:18.432788 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 00:22:18.496648 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 00:22:18.602116 1964718 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 00:22:18.932888 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 00:22:19.008597 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 00:22:19.103305 1964718 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 00:22:19.432277 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 00:22:19.496166 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 00:22:19.558341 1964718 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-98jl7" in "kube-system" namespace has status "Ready":"False"
	I0314 00:22:19.602848 1964718 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 00:22:19.933188 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 00:22:19.996560 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 00:22:20.103192 1964718 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 00:22:20.432938 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 00:22:20.496549 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 00:22:20.602507 1964718 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 00:22:20.932794 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 00:22:20.995535 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 00:22:21.103294 1964718 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 00:22:21.432781 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 00:22:21.496355 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 00:22:21.560843 1964718 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-98jl7" in "kube-system" namespace has status "Ready":"False"
	I0314 00:22:21.602173 1964718 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 00:22:21.939090 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 00:22:21.998640 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 00:22:22.104520 1964718 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 00:22:22.432986 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 00:22:22.496352 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 00:22:22.602728 1964718 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 00:22:22.932976 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 00:22:22.996425 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 00:22:23.103652 1964718 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 00:22:23.432765 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 00:22:23.496970 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 00:22:23.602914 1964718 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 00:22:23.933068 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 00:22:23.998170 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 00:22:24.060383 1964718 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-98jl7" in "kube-system" namespace has status "Ready":"True"
	I0314 00:22:24.060411 1964718 pod_ready.go:81] duration metric: took 30.009590889s for pod "nvidia-device-plugin-daemonset-98jl7" in "kube-system" namespace to be "Ready" ...
	I0314 00:22:24.060421 1964718 pod_ready.go:38] duration metric: took 42.42880696s of extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 00:22:24.060436 1964718 api_server.go:52] waiting for apiserver process to appear ...
	I0314 00:22:24.060500 1964718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:22:24.079313 1964718 api_server.go:72] duration metric: took 46.470759052s to wait for apiserver process to appear ...
	I0314 00:22:24.079342 1964718 api_server.go:88] waiting for apiserver healthz status ...
	I0314 00:22:24.079396 1964718 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0314 00:22:24.088637 1964718 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0314 00:22:24.090211 1964718 api_server.go:141] control plane version: v1.28.4
	I0314 00:22:24.090247 1964718 api_server.go:131] duration metric: took 10.896489ms to wait for apiserver health ...
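
The healthz probe logged above can be reproduced by hand against the same endpoint. A minimal sketch, assuming minikube's default certificate layout for the addons-122411 profile (the paths are illustrative; /healthz is typically also readable without client certificates):

# Query the apiserver health endpoint that minikube polls above.
curl --cacert "$HOME/.minikube/ca.crt" \
  --cert "$HOME/.minikube/profiles/addons-122411/client.crt" \
  --key "$HOME/.minikube/profiles/addons-122411/client.key" \
  https://192.168.49.2:8443/healthz
# expected body on success: ok
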
	I0314 00:22:24.090259 1964718 system_pods.go:43] waiting for kube-system pods to appear ...
	I0314 00:22:24.100919 1964718 system_pods.go:59] 18 kube-system pods found
	I0314 00:22:24.100959 1964718 system_pods.go:61] "coredns-5dd5756b68-zfmbr" [2549e865-f13d-45db-bd8c-a463d0ede910] Running
	I0314 00:22:24.100966 1964718 system_pods.go:61] "csi-hostpath-attacher-0" [75f3d193-77c8-4f59-ba26-435c0f9de530] Running
	I0314 00:22:24.100971 1964718 system_pods.go:61] "csi-hostpath-resizer-0" [e6388b1f-413f-4779-ba10-727304b97804] Running
	I0314 00:22:24.100979 1964718 system_pods.go:61] "csi-hostpathplugin-tvkjl" [8990c7d4-be70-4eb4-98b8-725f79750022] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0314 00:22:24.100985 1964718 system_pods.go:61] "etcd-addons-122411" [623dde03-3da0-4fad-b33e-2498c50c1685] Running
	I0314 00:22:24.100991 1964718 system_pods.go:61] "kindnet-84kzz" [564bf94d-cd62-43fb-9f84-9976346805b2] Running
	I0314 00:22:24.100995 1964718 system_pods.go:61] "kube-apiserver-addons-122411" [e6842e57-b8af-481a-89f1-c237561993cb] Running
	I0314 00:22:24.101000 1964718 system_pods.go:61] "kube-controller-manager-addons-122411" [08756eb3-4411-4d66-b998-552fd2d71228] Running
	I0314 00:22:24.101011 1964718 system_pods.go:61] "kube-ingress-dns-minikube" [c3eb37ea-3bfe-44e7-9383-d812dede7b99] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0314 00:22:24.101022 1964718 system_pods.go:61] "kube-proxy-l8qg6" [41ad03f4-7ab5-4995-9c6d-18b18abf5c70] Running
	I0314 00:22:24.101027 1964718 system_pods.go:61] "kube-scheduler-addons-122411" [d100c13d-299b-4a39-8b49-f777bf7480bb] Running
	I0314 00:22:24.101031 1964718 system_pods.go:61] "metrics-server-69cf46c98-5qtdh" [fe7998ae-210d-4e08-81f9-3e7f19032943] Running
	I0314 00:22:24.101037 1964718 system_pods.go:61] "nvidia-device-plugin-daemonset-98jl7" [3e3aa7cb-5082-4ad3-bd32-fb855ec98c06] Running
	I0314 00:22:24.101047 1964718 system_pods.go:61] "registry-k7jx6" [aaa61793-7482-468e-9a48-807a12f2eae9] Running
	I0314 00:22:24.101053 1964718 system_pods.go:61] "registry-proxy-ksf2h" [3c344254-4937-4c56-8655-cc99d0982dbf] Running
	I0314 00:22:24.101057 1964718 system_pods.go:61] "snapshot-controller-58dbcc7b99-k5hps" [9909c6c9-0e31-4f6d-9c10-0b8da1c4208b] Running
	I0314 00:22:24.101061 1964718 system_pods.go:61] "snapshot-controller-58dbcc7b99-s5lrs" [08f61656-83f2-49b2-8154-eaec3e1e35f0] Running
	I0314 00:22:24.101073 1964718 system_pods.go:61] "storage-provisioner" [ae7009c0-b2f3-440b-a120-b008e703c335] Running
	I0314 00:22:24.101080 1964718 system_pods.go:74] duration metric: took 10.814324ms to wait for pod list to return data ...
	I0314 00:22:24.101089 1964718 default_sa.go:34] waiting for default service account to be created ...
	I0314 00:22:24.103877 1964718 default_sa.go:45] found service account: "default"
	I0314 00:22:24.103905 1964718 default_sa.go:55] duration metric: took 2.80742ms for default service account to be created ...
	I0314 00:22:24.103915 1964718 system_pods.go:116] waiting for k8s-apps to be running ...
	I0314 00:22:24.106693 1964718 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 00:22:24.114994 1964718 system_pods.go:86] 18 kube-system pods found
	I0314 00:22:24.115036 1964718 system_pods.go:89] "coredns-5dd5756b68-zfmbr" [2549e865-f13d-45db-bd8c-a463d0ede910] Running
	I0314 00:22:24.115044 1964718 system_pods.go:89] "csi-hostpath-attacher-0" [75f3d193-77c8-4f59-ba26-435c0f9de530] Running
	I0314 00:22:24.115049 1964718 system_pods.go:89] "csi-hostpath-resizer-0" [e6388b1f-413f-4779-ba10-727304b97804] Running
	I0314 00:22:24.115082 1964718 system_pods.go:89] "csi-hostpathplugin-tvkjl" [8990c7d4-be70-4eb4-98b8-725f79750022] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0314 00:22:24.115095 1964718 system_pods.go:89] "etcd-addons-122411" [623dde03-3da0-4fad-b33e-2498c50c1685] Running
	I0314 00:22:24.115102 1964718 system_pods.go:89] "kindnet-84kzz" [564bf94d-cd62-43fb-9f84-9976346805b2] Running
	I0314 00:22:24.115106 1964718 system_pods.go:89] "kube-apiserver-addons-122411" [e6842e57-b8af-481a-89f1-c237561993cb] Running
	I0314 00:22:24.115113 1964718 system_pods.go:89] "kube-controller-manager-addons-122411" [08756eb3-4411-4d66-b998-552fd2d71228] Running
	I0314 00:22:24.115126 1964718 system_pods.go:89] "kube-ingress-dns-minikube" [c3eb37ea-3bfe-44e7-9383-d812dede7b99] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0314 00:22:24.115130 1964718 system_pods.go:89] "kube-proxy-l8qg6" [41ad03f4-7ab5-4995-9c6d-18b18abf5c70] Running
	I0314 00:22:24.115135 1964718 system_pods.go:89] "kube-scheduler-addons-122411" [d100c13d-299b-4a39-8b49-f777bf7480bb] Running
	I0314 00:22:24.115171 1964718 system_pods.go:89] "metrics-server-69cf46c98-5qtdh" [fe7998ae-210d-4e08-81f9-3e7f19032943] Running
	I0314 00:22:24.115183 1964718 system_pods.go:89] "nvidia-device-plugin-daemonset-98jl7" [3e3aa7cb-5082-4ad3-bd32-fb855ec98c06] Running
	I0314 00:22:24.115188 1964718 system_pods.go:89] "registry-k7jx6" [aaa61793-7482-468e-9a48-807a12f2eae9] Running
	I0314 00:22:24.115192 1964718 system_pods.go:89] "registry-proxy-ksf2h" [3c344254-4937-4c56-8655-cc99d0982dbf] Running
	I0314 00:22:24.115217 1964718 system_pods.go:89] "snapshot-controller-58dbcc7b99-k5hps" [9909c6c9-0e31-4f6d-9c10-0b8da1c4208b] Running
	I0314 00:22:24.115228 1964718 system_pods.go:89] "snapshot-controller-58dbcc7b99-s5lrs" [08f61656-83f2-49b2-8154-eaec3e1e35f0] Running
	I0314 00:22:24.115233 1964718 system_pods.go:89] "storage-provisioner" [ae7009c0-b2f3-440b-a120-b008e703c335] Running
	I0314 00:22:24.115250 1964718 system_pods.go:126] duration metric: took 11.319896ms to wait for k8s-apps to be running ...
	I0314 00:22:24.115266 1964718 system_svc.go:44] waiting for kubelet service to be running ....
	I0314 00:22:24.115339 1964718 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 00:22:24.128488 1964718 system_svc.go:56] duration metric: took 13.213107ms (WaitForService) to wait for kubelet
	I0314 00:22:24.128518 1964718 kubeadm.go:576] duration metric: took 46.519971796s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0314 00:22:24.128539 1964718 node_conditions.go:102] verifying NodePressure condition ...
	I0314 00:22:24.133042 1964718 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0314 00:22:24.133078 1964718 node_conditions.go:123] node cpu capacity is 2
	I0314 00:22:24.133091 1964718 node_conditions.go:105] duration metric: took 4.546369ms to run NodePressure ...
	I0314 00:22:24.133104 1964718 start.go:240] waiting for startup goroutines ...
	I0314 00:22:24.432927 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 00:22:24.497158 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 00:22:24.602729 1964718 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 00:22:24.932706 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 00:22:24.996853 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 00:22:25.103240 1964718 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 00:22:25.433916 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 00:22:25.497874 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 00:22:25.602404 1964718 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 00:22:25.934524 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 00:22:25.997035 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 00:22:26.102907 1964718 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 00:22:26.436214 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 00:22:26.496509 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 00:22:26.602742 1964718 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 00:22:26.933614 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 00:22:26.997820 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 00:22:27.103100 1964718 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 00:22:27.432687 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 00:22:27.496952 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 00:22:27.602532 1964718 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 00:22:27.932890 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 00:22:27.996797 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 00:22:28.103341 1964718 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 00:22:28.432434 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 00:22:28.496821 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 00:22:28.602525 1964718 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 00:22:28.933370 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 00:22:28.996507 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 00:22:29.102464 1964718 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 00:22:29.432624 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 00:22:29.496614 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 00:22:29.602856 1964718 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 00:22:29.934775 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 00:22:29.997409 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 00:22:30.105484 1964718 kapi.go:107] duration metric: took 43.513505491s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0314 00:22:30.432898 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 00:22:30.496704 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 00:22:30.933574 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 00:22:30.997265 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 00:22:31.432822 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 00:22:31.496580 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 00:22:31.934298 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 00:22:31.995662 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 00:22:32.432470 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 00:22:32.495834 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 00:22:32.933070 1964718 kapi.go:107] duration metric: took 43.504726108s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0314 00:22:32.935180 1964718 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-122411 cluster.
	I0314 00:22:32.937329 1964718 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0314 00:22:32.939888 1964718 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
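
As the gcp-auth messages above note, a pod opts out of credential mounting by carrying a label with the gcp-auth-skip-secret key. A minimal sketch (the pod name and image are hypothetical; the label value is illustrative, since the message above only requires the key):

# Create a pod that the gcp-auth webhook should skip.
kubectl --context addons-122411 apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: skip-gcp-demo              # hypothetical name
  labels:
    gcp-auth-skip-secret: "true"   # presence of the key is what matters
spec:
  containers:
  - name: app
    image: nginx                   # any image works; nginx chosen arbitrarily
EOF
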
	I0314 00:22:32.996372 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 00:22:33.495854 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 00:22:33.998234 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 00:22:34.496669 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 00:22:34.995650 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 00:22:35.496681 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 00:22:35.996333 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 00:22:36.495694 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 00:22:37.003121 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 00:22:37.495538 1964718 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 00:22:37.996431 1964718 kapi.go:107] duration metric: took 50.006162528s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0314 00:22:38.000840 1964718 out.go:177] * Enabled addons: ingress-dns, nvidia-device-plugin, cloud-spanner, storage-provisioner, inspektor-gadget, metrics-server, yakd, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0314 00:22:38.002754 1964718 addons.go:505] duration metric: took 1m0.393893567s for enable addons: enabled=[ingress-dns nvidia-device-plugin cloud-spanner storage-provisioner inspektor-gadget metrics-server yakd default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0314 00:22:38.002832 1964718 start.go:245] waiting for cluster config update ...
	I0314 00:22:38.002856 1964718 start.go:254] writing updated cluster config ...
	I0314 00:22:38.003318 1964718 ssh_runner.go:195] Run: rm -f paused
	I0314 00:22:38.341418 1964718 start.go:600] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0314 00:22:38.343557 1964718 out.go:177] * Done! kubectl is now configured to use "addons-122411" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD
	42811fc06c380       dd1b12fcb6097       8 seconds ago        Exited              hello-world-app                          2                   445e852e07026       hello-world-app-5d77478584-ktm7f
	be7e5417828a8       be5e6f23a9904       34 seconds ago       Running             nginx                                    0                   dc37f04ac7685       nginx
	8f6aedeb20a63       ee6d597e62dc8       About a minute ago   Exited              csi-snapshotter                          0                   21d4baa31a6d3       csi-hostpathplugin-tvkjl
	fefe2ff132232       642ded511e141       About a minute ago   Exited              csi-provisioner                          0                   21d4baa31a6d3       csi-hostpathplugin-tvkjl
	95780c4d10a9e       922312104da8a       About a minute ago   Exited              liveness-probe                           0                   21d4baa31a6d3       csi-hostpathplugin-tvkjl
	a6a4c9e531ee9       08f6b2990811a       About a minute ago   Exited              hostpath                                 0                   21d4baa31a6d3       csi-hostpathplugin-tvkjl
	99e901f3c5fb8       bafe72500920c       About a minute ago   Running             gcp-auth                                 0                   e20939c7f2b4b       gcp-auth-5f6b4f85fd-86hlm
	1c5c881a17d34       0107d56dbc0be       About a minute ago   Exited              node-driver-registrar                    0                   21d4baa31a6d3       csi-hostpathplugin-tvkjl
	be84f20a0ed63       c0cfb4ce73bda       About a minute ago   Running             nvidia-device-plugin-ctr                 0                   4e6865a7694d8       nvidia-device-plugin-daemonset-98jl7
	89172c0dbafb4       487fa743e1e22       About a minute ago   Exited              csi-resizer                              0                   c25d3fd5a1ad4       csi-hostpath-resizer-0
	e5b57a2274e94       1461903ec4fe9       About a minute ago   Exited              csi-external-health-monitor-controller   0                   21d4baa31a6d3       csi-hostpathplugin-tvkjl
	f5b7287eb555c       9a80d518f102c       About a minute ago   Exited              csi-attacher                             0                   1582b6e8c5198       csi-hostpath-attacher-0
	0d1480a78a678       1a024e390dd05       About a minute ago   Exited              patch                                    0                   69778a54da324       ingress-nginx-admission-patch-l4g59
	80571bfc05cd1       1a024e390dd05       About a minute ago   Exited              create                                   0                   b2dafd1304c4a       ingress-nginx-admission-create-cqgx5
	8fcf16cf0b609       4d1e5c3e97420       About a minute ago   Running             volume-snapshot-controller               0                   526aa2706b714       snapshot-controller-58dbcc7b99-k5hps
	78bf62845fe81       7ce2150c8929b       About a minute ago   Running             local-path-provisioner                   0                   bb0667de329c4       local-path-provisioner-78b46b4d5c-8mmnx
	e6813e18710ba       4d1e5c3e97420       About a minute ago   Running             volume-snapshot-controller               0                   56dedca96e2c1       snapshot-controller-58dbcc7b99-s5lrs
	afd6df3e13aa8       20e3f2db01e81       About a minute ago   Running             yakd                                     0                   828008ef71f21       yakd-dashboard-9947fc6bf-7xh2x
	5ecc0e7602972       97e04611ad434       About a minute ago   Running             coredns                                  0                   f36271ff34e70       coredns-5dd5756b68-zfmbr
	900f595fcc633       41340d5d57adb       2 minutes ago        Running             cloud-spanner-emulator                   0                   4fadc2659c6f1       cloud-spanner-emulator-6548d5df46-9pvwd
	61ca94ba68816       ba04bb24b9575       2 minutes ago        Running             storage-provisioner                      0                   ba5ef7c66edfd       storage-provisioner
	fd41927d5b596       4740c1948d3fc       2 minutes ago        Running             kindnet-cni                              0                   d2856623ff829       kindnet-84kzz
	7efda6344d4a8       3ca3ca488cf13       2 minutes ago        Running             kube-proxy                               0                   5e4e6a9aa966c       kube-proxy-l8qg6
	bedee29a8b768       05c284c929889       2 minutes ago        Running             kube-scheduler                           0                   5acc5af3027da       kube-scheduler-addons-122411
	e3a90640ceaa5       9961cbceaf234       2 minutes ago        Running             kube-controller-manager                  0                   27e5220ee5804       kube-controller-manager-addons-122411
	4a5e785fd0a00       04b4c447bb9d4       2 minutes ago        Running             kube-apiserver                           0                   96da72f60ca02       kube-apiserver-addons-122411
	6011bc5d47236       9cdd6470f48c8       2 minutes ago        Running             etcd                                     0                   6c08783c36256       etcd-addons-122411
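
The container status table above is the CRI-side listing that the post-mortem collects from the containerd node. Roughly the same view can be pulled from a live profile directly; an illustrative invocation, assuming the addons-122411 cluster still exists:

# List all containers, running and exited, via the CRI client on the node.
minikube -p addons-122411 ssh -- sudo crictl ps -a
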
	
	
	==> containerd <==
	Mar 14 00:23:49 addons-122411 containerd[767]: time="2024-03-14T00:23:49.368878868Z" level=info msg="shim disconnected" id=1582b6e8c51982c8fc5e9ceb974816e9698f6efadd012b8a96b440d0a4db7e8e
	Mar 14 00:23:49 addons-122411 containerd[767]: time="2024-03-14T00:23:49.368935097Z" level=warning msg="cleaning up after shim disconnected" id=1582b6e8c51982c8fc5e9ceb974816e9698f6efadd012b8a96b440d0a4db7e8e namespace=k8s.io
	Mar 14 00:23:49 addons-122411 containerd[767]: time="2024-03-14T00:23:49.368945715Z" level=info msg="cleaning up dead shim"
	Mar 14 00:23:49 addons-122411 containerd[767]: time="2024-03-14T00:23:49.369232785Z" level=info msg="StopPodSandbox for \"21d4baa31a6d329eab1ed1aea9ab5dd0b90553141a15930de90aa7d09cba6082\""
	Mar 14 00:23:49 addons-122411 containerd[767]: time="2024-03-14T00:23:49.369303365Z" level=info msg="Container to stop \"1c5c881a17d345a78c4e70cd883551cedcc834fdf4e5ef82b90204e4de7591bb\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Mar 14 00:23:49 addons-122411 containerd[767]: time="2024-03-14T00:23:49.369320809Z" level=info msg="Container to stop \"a6a4c9e531ee911d190f485629532c8a7a726b22512f13ad9b94386320bb44da\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Mar 14 00:23:49 addons-122411 containerd[767]: time="2024-03-14T00:23:49.369335103Z" level=info msg="Container to stop \"95780c4d10a9e477127ccb65b77fb2c87377e43999e418a0b04b4580c35ac5f3\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Mar 14 00:23:49 addons-122411 containerd[767]: time="2024-03-14T00:23:49.369348846Z" level=info msg="Container to stop \"8f6aedeb20a63c51e31ebf66c078cb46b990f0c2e6ad8a1dbc8666f7080034ef\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Mar 14 00:23:49 addons-122411 containerd[767]: time="2024-03-14T00:23:49.369364624Z" level=info msg="Container to stop \"e5b57a2274e94c4fcb757fd326ce6dc630d6dd4ce269bcfde6c05b9095fc75b0\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Mar 14 00:23:49 addons-122411 containerd[767]: time="2024-03-14T00:23:49.369378114Z" level=info msg="Container to stop \"fefe2ff132232a17d5cb6a717cf2a3434a946a917d0b6fc059004b017cbbd7be\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Mar 14 00:23:49 addons-122411 containerd[767]: time="2024-03-14T00:23:49.395579935Z" level=warning msg="cleanup warnings time=\"2024-03-14T00:23:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=8954 runtime=io.containerd.runc.v2\n"
	Mar 14 00:23:49 addons-122411 containerd[767]: time="2024-03-14T00:23:49.432265854Z" level=info msg="shim disconnected" id=c25d3fd5a1ad4024175bb6d6922efcd6a0d021190e774b411a852450127a9982
	Mar 14 00:23:49 addons-122411 containerd[767]: time="2024-03-14T00:23:49.432591718Z" level=warning msg="cleaning up after shim disconnected" id=c25d3fd5a1ad4024175bb6d6922efcd6a0d021190e774b411a852450127a9982 namespace=k8s.io
	Mar 14 00:23:49 addons-122411 containerd[767]: time="2024-03-14T00:23:49.432610229Z" level=info msg="cleaning up dead shim"
	Mar 14 00:23:49 addons-122411 containerd[767]: time="2024-03-14T00:23:49.435753196Z" level=info msg="shim disconnected" id=21d4baa31a6d329eab1ed1aea9ab5dd0b90553141a15930de90aa7d09cba6082
	Mar 14 00:23:49 addons-122411 containerd[767]: time="2024-03-14T00:23:49.435818623Z" level=warning msg="cleaning up after shim disconnected" id=21d4baa31a6d329eab1ed1aea9ab5dd0b90553141a15930de90aa7d09cba6082 namespace=k8s.io
	Mar 14 00:23:49 addons-122411 containerd[767]: time="2024-03-14T00:23:49.435830217Z" level=info msg="cleaning up dead shim"
	Mar 14 00:23:49 addons-122411 containerd[767]: time="2024-03-14T00:23:49.448259137Z" level=warning msg="cleanup warnings time=\"2024-03-14T00:23:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=9013 runtime=io.containerd.runc.v2\n"
	Mar 14 00:23:49 addons-122411 containerd[767]: time="2024-03-14T00:23:49.454255162Z" level=warning msg="cleanup warnings time=\"2024-03-14T00:23:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=9018 runtime=io.containerd.runc.v2\n"
	Mar 14 00:23:49 addons-122411 containerd[767]: time="2024-03-14T00:23:49.456202511Z" level=info msg="TearDown network for sandbox \"1582b6e8c51982c8fc5e9ceb974816e9698f6efadd012b8a96b440d0a4db7e8e\" successfully"
	Mar 14 00:23:49 addons-122411 containerd[767]: time="2024-03-14T00:23:49.456247894Z" level=info msg="StopPodSandbox for \"1582b6e8c51982c8fc5e9ceb974816e9698f6efadd012b8a96b440d0a4db7e8e\" returns successfully"
	Mar 14 00:23:49 addons-122411 containerd[767]: time="2024-03-14T00:23:49.512160491Z" level=info msg="TearDown network for sandbox \"c25d3fd5a1ad4024175bb6d6922efcd6a0d021190e774b411a852450127a9982\" successfully"
	Mar 14 00:23:49 addons-122411 containerd[767]: time="2024-03-14T00:23:49.512213963Z" level=info msg="StopPodSandbox for \"c25d3fd5a1ad4024175bb6d6922efcd6a0d021190e774b411a852450127a9982\" returns successfully"
	Mar 14 00:23:49 addons-122411 containerd[767]: time="2024-03-14T00:23:49.537973294Z" level=info msg="TearDown network for sandbox \"21d4baa31a6d329eab1ed1aea9ab5dd0b90553141a15930de90aa7d09cba6082\" successfully"
	Mar 14 00:23:49 addons-122411 containerd[767]: time="2024-03-14T00:23:49.538023960Z" level=info msg="StopPodSandbox for \"21d4baa31a6d329eab1ed1aea9ab5dd0b90553141a15930de90aa7d09cba6082\" returns successfully"
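
The StopPodSandbox/TearDown sequence above is containerd dismantling the csi-hostpathplugin-tvkjl, csi-hostpath-attacher-0, and csi-hostpath-resizer-0 sandboxes after their containers exited. While a sandbox still exists it can be examined by the ID containerd logs; a sketch reusing the csi-hostpathplugin sandbox ID from the lines above:

# Inspect the pod sandbox that containerd was tearing down.
minikube -p addons-122411 ssh -- sudo crictl inspectp 21d4baa31a6d329eab1ed1aea9ab5dd0b90553141a15930de90aa7d09cba6082
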
	
	
	==> coredns [5ecc0e760297260d85e3dc04184a1a6386d9ab0f8dd1c7586ebf98d6d6cbf8ce] <==
	[INFO] 10.244.0.19:43348 - 52215 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000149594s
	[INFO] 10.244.0.19:43348 - 4900 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.009501425s
	[INFO] 10.244.0.19:49661 - 37165 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.009487443s
	[INFO] 10.244.0.19:43348 - 6641 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.01349177s
	[INFO] 10.244.0.19:49661 - 47428 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.01379471s
	[INFO] 10.244.0.19:49661 - 1726 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00018445s
	[INFO] 10.244.0.19:43348 - 34644 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000057731s
	[INFO] 10.244.0.19:57925 - 9916 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000087893s
	[INFO] 10.244.0.19:50716 - 14509 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000076487s
	[INFO] 10.244.0.19:57925 - 50671 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000043962s
	[INFO] 10.244.0.19:50716 - 59698 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000061021s
	[INFO] 10.244.0.19:57925 - 45775 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000077152s
	[INFO] 10.244.0.19:50716 - 57642 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000058001s
	[INFO] 10.244.0.19:57925 - 35638 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000040147s
	[INFO] 10.244.0.19:50716 - 9934 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00003689s
	[INFO] 10.244.0.19:57925 - 58247 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000042995s
	[INFO] 10.244.0.19:50716 - 58398 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000041321s
	[INFO] 10.244.0.19:57925 - 54818 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.0000405s
	[INFO] 10.244.0.19:50716 - 33444 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000036505s
	[INFO] 10.244.0.19:57925 - 19605 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001280721s
	[INFO] 10.244.0.19:50716 - 8377 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001275781s
	[INFO] 10.244.0.19:57925 - 35710 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001079426s
	[INFO] 10.244.0.19:50716 - 14841 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001064772s
	[INFO] 10.244.0.19:57925 - 4722 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00005179s
	[INFO] 10.244.0.19:50716 - 57692 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00003506s
	
	
	==> describe nodes <==
	Name:               addons-122411
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-122411
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eceebabcbdee8f7e371d6df61e2829908b6c6abe
	                    minikube.k8s.io/name=addons-122411
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_14T00_21_26_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-122411
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 14 Mar 2024 00:21:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-122411
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 14 Mar 2024 00:23:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 14 Mar 2024 00:23:28 +0000   Thu, 14 Mar 2024 00:21:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 14 Mar 2024 00:23:28 +0000   Thu, 14 Mar 2024 00:21:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 14 Mar 2024 00:23:28 +0000   Thu, 14 Mar 2024 00:21:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 14 Mar 2024 00:23:28 +0000   Thu, 14 Mar 2024 00:21:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-122411
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	System Info:
	  Machine ID:                 8c735ed4ae0740e7bb1b9548d57bee0f
	  System UUID:                79f50cbe-c3ed-4396-addd-8d47d04f9fde
	  Boot ID:                    ae603cd7-e506-4ea2-a0e0-984864774a93
	  Kernel Version:             5.15.0-1055-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.6.28
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (17 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-6548d5df46-9pvwd    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m9s
	  default                     hello-world-app-5d77478584-ktm7f           0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         37s
	  gcp-auth                    gcp-auth-5f6b4f85fd-86hlm                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 coredns-5dd5756b68-zfmbr                   100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m12s
	  kube-system                 etcd-addons-122411                         100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m25s
	  kube-system                 kindnet-84kzz                              100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m13s
	  kube-system                 kube-apiserver-addons-122411               250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m27s
	  kube-system                 kube-controller-manager-addons-122411      200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m25s
	  kube-system                 kube-proxy-l8qg6                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m13s
	  kube-system                 kube-scheduler-addons-122411               100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m27s
	  kube-system                 nvidia-device-plugin-daemonset-98jl7       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m10s
	  kube-system                 snapshot-controller-58dbcc7b99-k5hps       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m5s
	  kube-system                 snapshot-controller-58dbcc7b99-s5lrs       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m5s
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m7s
	  local-path-storage          local-path-provisioner-78b46b4d5c-8mmnx    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m5s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-7xh2x             0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     2m6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             348Mi (4%)  476Mi (6%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 2m10s  kube-proxy       
	  Normal  Starting                 2m25s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m25s  kubelet          Node addons-122411 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m25s  kubelet          Node addons-122411 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m25s  kubelet          Node addons-122411 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             2m25s  kubelet          Node addons-122411 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  2m25s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                2m15s  kubelet          Node addons-122411 status is now: NodeReady
	  Normal  RegisteredNode           2m13s  node-controller  Node addons-122411 event: Registered Node addons-122411 in Controller
	
	
	==> dmesg <==
	[  +0.001356] FS-Cache: O-key=[8] '13435c0100000000'
	[  +0.000805] FS-Cache: N-cookie c=00000066 [p=0000005d fl=2 nc=0 na=1]
	[  +0.001045] FS-Cache: N-cookie d=00000000dfa10ab5{9p.inode} n=00000000d8f48c7d
	[  +0.001277] FS-Cache: N-key=[8] '13435c0100000000'
	[  +0.002503] FS-Cache: Duplicate cookie detected
	[  +0.000822] FS-Cache: O-cookie c=00000060 [p=0000005d fl=226 nc=0 na=1]
	[  +0.001194] FS-Cache: O-cookie d=00000000dfa10ab5{9p.inode} n=00000000ab28f1bc
	[  +0.001304] FS-Cache: O-key=[8] '13435c0100000000'
	[  +0.000805] FS-Cache: N-cookie c=00000067 [p=0000005d fl=2 nc=0 na=1]
	[  +0.000940] FS-Cache: N-cookie d=00000000dfa10ab5{9p.inode} n=0000000005bfc23d
	[  +0.001296] FS-Cache: N-key=[8] '13435c0100000000'
	[  +1.948127] FS-Cache: Duplicate cookie detected
	[  +0.000743] FS-Cache: O-cookie c=0000005e [p=0000005d fl=226 nc=0 na=1]
	[  +0.000996] FS-Cache: O-cookie d=00000000dfa10ab5{9p.inode} n=0000000047416d4f
	[  +0.001160] FS-Cache: O-key=[8] '12435c0100000000'
	[  +0.000711] FS-Cache: N-cookie c=00000069 [p=0000005d fl=2 nc=0 na=1]
	[  +0.000981] FS-Cache: N-cookie d=00000000dfa10ab5{9p.inode} n=00000000ddb7d408
	[  +0.001200] FS-Cache: N-key=[8] '12435c0100000000'
	[  +0.277874] FS-Cache: Duplicate cookie detected
	[  +0.000809] FS-Cache: O-cookie c=00000063 [p=0000005d fl=226 nc=0 na=1]
	[  +0.000965] FS-Cache: O-cookie d=00000000dfa10ab5{9p.inode} n=000000008040a2ac
	[  +0.001136] FS-Cache: O-key=[8] '18435c0100000000'
	[  +0.000845] FS-Cache: N-cookie c=0000006a [p=0000005d fl=2 nc=0 na=1]
	[  +0.000985] FS-Cache: N-cookie d=00000000dfa10ab5{9p.inode} n=0000000081a41d92
	[  +0.001094] FS-Cache: N-key=[8] '18435c0100000000'
	
	
	==> etcd [6011bc5d47236f232ce06b9abfd2a40cab18b43e83889bf618d2cea3add70cbe] <==
	{"level":"info","ts":"2024-03-14T00:21:18.748692Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-03-14T00:21:18.748706Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-03-14T00:21:18.749434Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2024-03-14T00:21:18.749522Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2024-03-14T00:21:18.749581Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-14T00:21:18.749608Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-14T00:21:18.749616Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-14T00:21:19.315249Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-03-14T00:21:19.315361Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-03-14T00:21:19.315467Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-03-14T00:21:19.315591Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-03-14T00:21:19.315686Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-03-14T00:21:19.315776Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-03-14T00:21:19.315887Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-03-14T00:21:19.322244Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-14T00:21:19.32684Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-122411 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-14T00:21:19.327135Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-14T00:21:19.328409Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-03-14T00:21:19.328694Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-14T00:21:19.330165Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-14T00:21:19.330286Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-14T00:21:19.338789Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-14T00:21:19.34258Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-14T00:21:19.342833Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-14T00:21:19.343241Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> gcp-auth [99e901f3c5fb8c2e2dfe0b7d4507105ab75a009b136b1b41e0327d620a7de5d3] <==
	2024/03/14 00:22:31 GCP Auth Webhook started!
	2024/03/14 00:22:49 Ready to marshal response ...
	2024/03/14 00:22:49 Ready to write response ...
	2024/03/14 00:23:13 Ready to marshal response ...
	2024/03/14 00:23:13 Ready to write response ...
	2024/03/14 00:23:16 Ready to marshal response ...
	2024/03/14 00:23:16 Ready to write response ...
	2024/03/14 00:23:23 Ready to marshal response ...
	2024/03/14 00:23:23 Ready to write response ...
	2024/03/14 00:23:38 Ready to marshal response ...
	2024/03/14 00:23:38 Ready to write response ...
	
	
	==> kernel <==
	 00:23:50 up  8:06,  0 users,  load average: 2.68, 2.77, 2.73
	Linux addons-122411 5.15.0-1055-aws #60~20.04.1-Ubuntu SMP Thu Feb 22 15:54:21 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [fd41927d5b5960f3600296f2a202c97121894b97996da03ecdffc82c6916cb4d] <==
	I0314 00:21:42.011265       1 main.go:227] handling current node
	I0314 00:21:52.037526       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0314 00:21:52.037560       1 main.go:227] handling current node
	I0314 00:22:02.049411       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0314 00:22:02.049442       1 main.go:227] handling current node
	I0314 00:22:12.064246       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0314 00:22:12.064287       1 main.go:227] handling current node
	I0314 00:22:22.078123       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0314 00:22:22.078162       1 main.go:227] handling current node
	I0314 00:22:32.090908       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0314 00:22:32.090938       1 main.go:227] handling current node
	I0314 00:22:42.098457       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0314 00:22:42.098490       1 main.go:227] handling current node
	I0314 00:22:52.110771       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0314 00:22:52.110800       1 main.go:227] handling current node
	I0314 00:23:02.126577       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0314 00:23:02.126609       1 main.go:227] handling current node
	I0314 00:23:12.136941       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0314 00:23:12.136978       1 main.go:227] handling current node
	I0314 00:23:22.141711       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0314 00:23:22.141742       1 main.go:227] handling current node
	I0314 00:23:32.152771       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0314 00:23:32.152801       1 main.go:227] handling current node
	I0314 00:23:42.284383       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0314 00:23:42.284415       1 main.go:227] handling current node
	
	
	==> kube-apiserver [4a5e785fd0a00d963f357c98432c53e43acef2f961f3a29fc053f7ee1094d4ca] <==
	W0314 00:21:47.174705       1 aggregator.go:166] failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0314 00:21:47.648506       1 alloc.go:330] "allocated clusterIPs" service="kube-system/csi-hostpath-attacher" clusterIPs={"IPv4":"10.98.197.71"}
	I0314 00:21:47.674479       1 controller.go:624] quota admission added evaluator for: statefulsets.apps
	I0314 00:21:47.895869       1 alloc.go:330] "allocated clusterIPs" service="kube-system/csi-hostpath-resizer" clusterIPs={"IPv4":"10.97.102.11"}
	W0314 00:21:48.444121       1 aggregator.go:166] failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0314 00:21:49.234209       1 alloc.go:330] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.105.118.115"}
	E0314 00:22:03.321993       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.40.108:443/apis/metrics.k8s.io/v1beta1: Get "https://10.105.40.108:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.105.40.108:443: connect: connection refused
	W0314 00:22:03.322335       1 handler_proxy.go:93] no RequestInfo found in the context
	E0314 00:22:03.322394       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E0314 00:22:03.322972       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.40.108:443/apis/metrics.k8s.io/v1beta1: Get "https://10.105.40.108:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.105.40.108:443: connect: connection refused
	I0314 00:22:03.323320       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0314 00:22:03.328943       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.40.108:443/apis/metrics.k8s.io/v1beta1: Get "https://10.105.40.108:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.105.40.108:443: connect: connection refused
	I0314 00:22:03.424995       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0314 00:22:21.774984       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0314 00:23:04.331914       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0314 00:23:07.747716       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0314 00:23:07.760663       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0314 00:23:08.778587       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0314 00:23:13.300543       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I0314 00:23:13.626056       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.98.143.1"}
	I0314 00:23:23.359653       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.108.80.200"}
	I0314 00:23:28.030106       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	
	
	==> kube-controller-manager [e3a90640ceaa5070a7e644a8c3ec86d0b11c16330a01f8b0a68bb54168056b8b] <==
	E0314 00:23:17.993143       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0314 00:23:23.089518       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I0314 00:23:23.110930       1 event.go:307] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-ktm7f"
	I0314 00:23:23.138812       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="48.899042ms"
	I0314 00:23:23.169955       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="31.092096ms"
	I0314 00:23:23.194806       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="24.798736ms"
	I0314 00:23:23.195001       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="72.648µs"
	W0314 00:23:26.001202       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0314 00:23:26.001243       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0314 00:23:26.124664       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="74.814µs"
	I0314 00:23:27.149972       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="43.495µs"
	I0314 00:23:28.173650       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="70.777µs"
	I0314 00:23:30.576504       1 event.go:307] "Event occurred" object="default/hpvc-restore" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0314 00:23:37.188492       1 event.go:307] "Event occurred" object="default/hpvc-restore" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0314 00:23:37.419223       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0314 00:23:37.419269       1 shared_informer.go:318] Caches are synced for resource quota
	I0314 00:23:38.157208       1 event.go:307] "Event occurred" object="default/hpvc-restore" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	W0314 00:23:40.506800       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0314 00:23:40.506841       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0314 00:23:40.907079       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-76dc478dd8" duration="6.794µs"
	I0314 00:23:40.907378       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I0314 00:23:40.918280       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I0314 00:23:42.210496       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="116.799µs"
	I0314 00:23:48.901488       1 stateful_set.go:458] "StatefulSet has been deleted" key="kube-system/csi-hostpath-attacher"
	I0314 00:23:48.988235       1 stateful_set.go:458] "StatefulSet has been deleted" key="kube-system/csi-hostpath-resizer"
	
	
	==> kube-proxy [7efda6344d4a8449a9484d899dc41810be1633baf6a5a5b65e012e0d40e73a53] <==
	I0314 00:21:39.889434       1 server_others.go:69] "Using iptables proxy"
	I0314 00:21:39.916316       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I0314 00:21:39.959826       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0314 00:21:39.962443       1 server_others.go:152] "Using iptables Proxier"
	I0314 00:21:39.962483       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0314 00:21:39.962499       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0314 00:21:39.962535       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0314 00:21:39.962751       1 server.go:846] "Version info" version="v1.28.4"
	I0314 00:21:39.962765       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 00:21:39.971799       1 config.go:188] "Starting service config controller"
	I0314 00:21:39.971845       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0314 00:21:39.971868       1 config.go:97] "Starting endpoint slice config controller"
	I0314 00:21:39.971876       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0314 00:21:39.973627       1 config.go:315] "Starting node config controller"
	I0314 00:21:39.973641       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0314 00:21:40.072205       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0314 00:21:40.072275       1 shared_informer.go:318] Caches are synced for service config
	I0314 00:21:40.075092       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [bedee29a8b76861ab9447473f45d0309be1f6d3838aaa2897b7c88305a29d3c8] <==
	W0314 00:21:22.034246       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0314 00:21:22.034263       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0314 00:21:22.034333       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0314 00:21:22.034356       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0314 00:21:22.034413       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0314 00:21:22.034429       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0314 00:21:22.034589       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0314 00:21:22.034691       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0314 00:21:22.849944       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0314 00:21:22.850204       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0314 00:21:22.859619       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0314 00:21:22.859652       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0314 00:21:22.883173       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0314 00:21:22.883383       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0314 00:21:22.927842       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0314 00:21:22.928489       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0314 00:21:22.929541       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0314 00:21:22.929720       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0314 00:21:22.951107       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0314 00:21:22.951161       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0314 00:21:23.055327       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0314 00:21:23.055367       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0314 00:21:23.126197       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0314 00:21:23.126419       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0314 00:21:25.004455       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 14 00:23:50 addons-122411 kubelet[1492]: I0314 00:23:50.408920    1492 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e5b57a2274e94c4fcb757fd326ce6dc630d6dd4ce269bcfde6c05b9095fc75b0"} err="failed to get container status \"e5b57a2274e94c4fcb757fd326ce6dc630d6dd4ce269bcfde6c05b9095fc75b0\": rpc error: code = NotFound desc = an error occurred when try to find container \"e5b57a2274e94c4fcb757fd326ce6dc630d6dd4ce269bcfde6c05b9095fc75b0\": not found"
	Mar 14 00:23:50 addons-122411 kubelet[1492]: I0314 00:23:50.408950    1492 scope.go:117] "RemoveContainer" containerID="8f6aedeb20a63c51e31ebf66c078cb46b990f0c2e6ad8a1dbc8666f7080034ef"
	Mar 14 00:23:50 addons-122411 kubelet[1492]: I0314 00:23:50.409286    1492 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8f6aedeb20a63c51e31ebf66c078cb46b990f0c2e6ad8a1dbc8666f7080034ef"} err="failed to get container status \"8f6aedeb20a63c51e31ebf66c078cb46b990f0c2e6ad8a1dbc8666f7080034ef\": rpc error: code = NotFound desc = an error occurred when try to find container \"8f6aedeb20a63c51e31ebf66c078cb46b990f0c2e6ad8a1dbc8666f7080034ef\": not found"
	Mar 14 00:23:50 addons-122411 kubelet[1492]: I0314 00:23:50.409313    1492 scope.go:117] "RemoveContainer" containerID="fefe2ff132232a17d5cb6a717cf2a3434a946a917d0b6fc059004b017cbbd7be"
	Mar 14 00:23:50 addons-122411 kubelet[1492]: I0314 00:23:50.409709    1492 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fefe2ff132232a17d5cb6a717cf2a3434a946a917d0b6fc059004b017cbbd7be"} err="failed to get container status \"fefe2ff132232a17d5cb6a717cf2a3434a946a917d0b6fc059004b017cbbd7be\": rpc error: code = NotFound desc = an error occurred when try to find container \"fefe2ff132232a17d5cb6a717cf2a3434a946a917d0b6fc059004b017cbbd7be\": not found"
	Mar 14 00:23:50 addons-122411 kubelet[1492]: I0314 00:23:50.409736    1492 scope.go:117] "RemoveContainer" containerID="95780c4d10a9e477127ccb65b77fb2c87377e43999e418a0b04b4580c35ac5f3"
	Mar 14 00:23:50 addons-122411 kubelet[1492]: I0314 00:23:50.410080    1492 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"95780c4d10a9e477127ccb65b77fb2c87377e43999e418a0b04b4580c35ac5f3"} err="failed to get container status \"95780c4d10a9e477127ccb65b77fb2c87377e43999e418a0b04b4580c35ac5f3\": rpc error: code = NotFound desc = an error occurred when try to find container \"95780c4d10a9e477127ccb65b77fb2c87377e43999e418a0b04b4580c35ac5f3\": not found"
	Mar 14 00:23:50 addons-122411 kubelet[1492]: I0314 00:23:50.410111    1492 scope.go:117] "RemoveContainer" containerID="a6a4c9e531ee911d190f485629532c8a7a726b22512f13ad9b94386320bb44da"
	Mar 14 00:23:50 addons-122411 kubelet[1492]: I0314 00:23:50.410442    1492 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a6a4c9e531ee911d190f485629532c8a7a726b22512f13ad9b94386320bb44da"} err="failed to get container status \"a6a4c9e531ee911d190f485629532c8a7a726b22512f13ad9b94386320bb44da\": rpc error: code = NotFound desc = an error occurred when try to find container \"a6a4c9e531ee911d190f485629532c8a7a726b22512f13ad9b94386320bb44da\": not found"
	Mar 14 00:23:50 addons-122411 kubelet[1492]: I0314 00:23:50.410468    1492 scope.go:117] "RemoveContainer" containerID="1c5c881a17d345a78c4e70cd883551cedcc834fdf4e5ef82b90204e4de7591bb"
	Mar 14 00:23:50 addons-122411 kubelet[1492]: I0314 00:23:50.410907    1492 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1c5c881a17d345a78c4e70cd883551cedcc834fdf4e5ef82b90204e4de7591bb"} err="failed to get container status \"1c5c881a17d345a78c4e70cd883551cedcc834fdf4e5ef82b90204e4de7591bb\": rpc error: code = NotFound desc = an error occurred when try to find container \"1c5c881a17d345a78c4e70cd883551cedcc834fdf4e5ef82b90204e4de7591bb\": not found"
	Mar 14 00:23:50 addons-122411 kubelet[1492]: I0314 00:23:50.410930    1492 scope.go:117] "RemoveContainer" containerID="e5b57a2274e94c4fcb757fd326ce6dc630d6dd4ce269bcfde6c05b9095fc75b0"
	Mar 14 00:23:50 addons-122411 kubelet[1492]: I0314 00:23:50.411326    1492 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e5b57a2274e94c4fcb757fd326ce6dc630d6dd4ce269bcfde6c05b9095fc75b0"} err="failed to get container status \"e5b57a2274e94c4fcb757fd326ce6dc630d6dd4ce269bcfde6c05b9095fc75b0\": rpc error: code = NotFound desc = an error occurred when try to find container \"e5b57a2274e94c4fcb757fd326ce6dc630d6dd4ce269bcfde6c05b9095fc75b0\": not found"
	Mar 14 00:23:50 addons-122411 kubelet[1492]: I0314 00:23:50.411354    1492 scope.go:117] "RemoveContainer" containerID="8f6aedeb20a63c51e31ebf66c078cb46b990f0c2e6ad8a1dbc8666f7080034ef"
	Mar 14 00:23:50 addons-122411 kubelet[1492]: I0314 00:23:50.411693    1492 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8f6aedeb20a63c51e31ebf66c078cb46b990f0c2e6ad8a1dbc8666f7080034ef"} err="failed to get container status \"8f6aedeb20a63c51e31ebf66c078cb46b990f0c2e6ad8a1dbc8666f7080034ef\": rpc error: code = NotFound desc = an error occurred when try to find container \"8f6aedeb20a63c51e31ebf66c078cb46b990f0c2e6ad8a1dbc8666f7080034ef\": not found"
	Mar 14 00:23:50 addons-122411 kubelet[1492]: I0314 00:23:50.411719    1492 scope.go:117] "RemoveContainer" containerID="fefe2ff132232a17d5cb6a717cf2a3434a946a917d0b6fc059004b017cbbd7be"
	Mar 14 00:23:50 addons-122411 kubelet[1492]: I0314 00:23:50.412090    1492 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fefe2ff132232a17d5cb6a717cf2a3434a946a917d0b6fc059004b017cbbd7be"} err="failed to get container status \"fefe2ff132232a17d5cb6a717cf2a3434a946a917d0b6fc059004b017cbbd7be\": rpc error: code = NotFound desc = an error occurred when try to find container \"fefe2ff132232a17d5cb6a717cf2a3434a946a917d0b6fc059004b017cbbd7be\": not found"
	Mar 14 00:23:50 addons-122411 kubelet[1492]: I0314 00:23:50.412120    1492 scope.go:117] "RemoveContainer" containerID="95780c4d10a9e477127ccb65b77fb2c87377e43999e418a0b04b4580c35ac5f3"
	Mar 14 00:23:50 addons-122411 kubelet[1492]: I0314 00:23:50.412455    1492 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"95780c4d10a9e477127ccb65b77fb2c87377e43999e418a0b04b4580c35ac5f3"} err="failed to get container status \"95780c4d10a9e477127ccb65b77fb2c87377e43999e418a0b04b4580c35ac5f3\": rpc error: code = NotFound desc = an error occurred when try to find container \"95780c4d10a9e477127ccb65b77fb2c87377e43999e418a0b04b4580c35ac5f3\": not found"
	Mar 14 00:23:50 addons-122411 kubelet[1492]: I0314 00:23:50.412480    1492 scope.go:117] "RemoveContainer" containerID="a6a4c9e531ee911d190f485629532c8a7a726b22512f13ad9b94386320bb44da"
	Mar 14 00:23:50 addons-122411 kubelet[1492]: I0314 00:23:50.412807    1492 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a6a4c9e531ee911d190f485629532c8a7a726b22512f13ad9b94386320bb44da"} err="failed to get container status \"a6a4c9e531ee911d190f485629532c8a7a726b22512f13ad9b94386320bb44da\": rpc error: code = NotFound desc = an error occurred when try to find container \"a6a4c9e531ee911d190f485629532c8a7a726b22512f13ad9b94386320bb44da\": not found"
	Mar 14 00:23:50 addons-122411 kubelet[1492]: I0314 00:23:50.412834    1492 scope.go:117] "RemoveContainer" containerID="1c5c881a17d345a78c4e70cd883551cedcc834fdf4e5ef82b90204e4de7591bb"
	Mar 14 00:23:50 addons-122411 kubelet[1492]: I0314 00:23:50.413211    1492 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1c5c881a17d345a78c4e70cd883551cedcc834fdf4e5ef82b90204e4de7591bb"} err="failed to get container status \"1c5c881a17d345a78c4e70cd883551cedcc834fdf4e5ef82b90204e4de7591bb\": rpc error: code = NotFound desc = an error occurred when try to find container \"1c5c881a17d345a78c4e70cd883551cedcc834fdf4e5ef82b90204e4de7591bb\": not found"
	Mar 14 00:23:50 addons-122411 kubelet[1492]: I0314 00:23:50.413270    1492 scope.go:117] "RemoveContainer" containerID="e5b57a2274e94c4fcb757fd326ce6dc630d6dd4ce269bcfde6c05b9095fc75b0"
	Mar 14 00:23:50 addons-122411 kubelet[1492]: I0314 00:23:50.413583    1492 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e5b57a2274e94c4fcb757fd326ce6dc630d6dd4ce269bcfde6c05b9095fc75b0"} err="failed to get container status \"e5b57a2274e94c4fcb757fd326ce6dc630d6dd4ce269bcfde6c05b9095fc75b0\": rpc error: code = NotFound desc = an error occurred when try to find container \"e5b57a2274e94c4fcb757fd326ce6dc630d6dd4ce269bcfde6c05b9095fc75b0\": not found"
	
	
	==> storage-provisioner [61ca94ba6881646e2b5eb6bd1606199b57a96f2190cd6475bbaf33bc8c153b2c] <==
	I0314 00:21:45.151246       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0314 00:21:45.250752       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0314 00:21:45.250810       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0314 00:21:45.348458       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0314 00:21:45.349268       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-122411_8c552f76-9839-4c14-914b-134ef230b25e!
	I0314 00:21:45.357062       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"81f0a142-3e65-44d5-af86-bb2d2a0b6331", APIVersion:"v1", ResourceVersion:"624", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-122411_8c552f76-9839-4c14-914b-134ef230b25e became leader
	I0314 00:21:45.450086       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-122411_8c552f76-9839-4c14-914b-134ef230b25e!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-122411 -n addons-122411
helpers_test.go:261: (dbg) Run:  kubectl --context addons-122411 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (38.19s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-arm64 -p functional-362954 image load --daemon gcr.io/google-containers/addon-resizer:functional-362954 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-arm64 -p functional-362954 image load --daemon gcr.io/google-containers/addon-resizer:functional-362954 --alsologtostderr: (3.407365524s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-362954 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-362954" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.65s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-arm64 -p functional-362954 image load --daemon gcr.io/google-containers/addon-resizer:functional-362954 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-arm64 -p functional-362954 image load --daemon gcr.io/google-containers/addon-resizer:functional-362954 --alsologtostderr: (3.191154434s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-362954 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-362954" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.44s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.69646249s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-362954
functional_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p functional-362954 image load --daemon gcr.io/google-containers/addon-resizer:functional-362954 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-arm64 -p functional-362954 image load --daemon gcr.io/google-containers/addon-resizer:functional-362954 --alsologtostderr: (3.622963738s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-362954 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-362954" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.66s)
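
Note: ImageLoadDaemon, ImageReloadDaemon, and ImageTagAndLoadDaemon all fail the same assertion at functional_test.go:442: after "image load --daemon" completes, the test expects the tag to appear in the cluster's image list, but "image ls" does not show it. A minimal sketch for checking this by hand, assuming the functional-362954 profile is still running (the grep filter is illustrative, not part of the test):

	# load the image from the local docker daemon into the minikube cluster
	out/minikube-linux-arm64 -p functional-362954 image load --daemon gcr.io/google-containers/addon-resizer:functional-362954
	# then confirm the tag shows up in the in-cluster image listing
	out/minikube-linux-arm64 -p functional-362954 image ls | grep addon-resizer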

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-arm64 -p functional-362954 image save gcr.io/google-containers/addon-resizer:functional-362954 /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:385: expected "/home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.64s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-arm64 -p functional-362954 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:410: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I0314 00:28:51.750395 1993858 out.go:291] Setting OutFile to fd 1 ...
	I0314 00:28:51.751362 1993858 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 00:28:51.751402 1993858 out.go:304] Setting ErrFile to fd 2...
	I0314 00:28:51.751409 1993858 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 00:28:51.751886 1993858 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18375-1958430/.minikube/bin
	I0314 00:28:51.752587 1993858 config.go:182] Loaded profile config "functional-362954": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0314 00:28:51.752789 1993858 config.go:182] Loaded profile config "functional-362954": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0314 00:28:51.753779 1993858 cli_runner.go:164] Run: docker container inspect functional-362954 --format={{.State.Status}}
	I0314 00:28:51.771436 1993858 ssh_runner.go:195] Run: systemctl --version
	I0314 00:28:51.771515 1993858 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-362954
	I0314 00:28:51.789794 1993858 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35056 SSHKeyPath:/home/jenkins/minikube-integration/18375-1958430/.minikube/machines/functional-362954/id_rsa Username:docker}
	I0314 00:28:51.888474 1993858 cache_images.go:286] Loading image from: /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar
	W0314 00:28:51.888533 1993858 cache_images.go:254] Failed to load cached images for profile functional-362954. make sure the profile is running. loading images: stat /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar: no such file or directory
	I0314 00:28:51.888566 1993858 cache_images.go:262] succeeded pushing to: 
	I0314 00:28:51.888572 1993858 cache_images.go:263] failed pushing to: functional-362954

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.21s)
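
Note: this ImageLoadFromFile failure follows directly from the ImageSaveToFile failure above: "image save" never wrote /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar, so the load step fails in cache_images.go with "stat ...: no such file or directory". A minimal sketch of the same save-then-load sequence, assuming a running functional-362954 profile and a hypothetical scratch path /tmp/addon-resizer-save.tar:

	# save the tagged image out of the cluster to a tarball (the step that failed in this run)
	out/minikube-linux-arm64 -p functional-362954 image save gcr.io/google-containers/addon-resizer:functional-362954 /tmp/addon-resizer-save.tar
	# the load can only succeed if the tarball actually exists
	ls -l /tmp/addon-resizer-save.tar
	out/minikube-linux-arm64 -p functional-362954 image load /tmp/addon-resizer-save.tar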

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (373.45s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-023742 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
E0314 01:06:38.465535 1963897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/functional-362954/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p old-k8s-version-023742 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: exit status 80 (6m9.57352381s)

                                                
                                                
-- stdout --
	* [old-k8s-version-023742] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18375
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18375-1958430/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18375-1958430/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the docker driver based on existing profile
	* Starting "old-k8s-version-023742" primary control-plane node in "old-k8s-version-023742" cluster
	* Pulling base image v0.0.42-1710284843-18375 ...
	* Restarting existing docker container for "old-k8s-version-023742" ...
	* Preparing Kubernetes v1.20.0 on containerd 1.6.28 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image registry.k8s.io/echoserver:1.4
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-023742 addons enable metrics-server
	
	* Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0314 01:06:01.799114 2159335 out.go:291] Setting OutFile to fd 1 ...
	I0314 01:06:01.799394 2159335 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 01:06:01.799423 2159335 out.go:304] Setting ErrFile to fd 2...
	I0314 01:06:01.799443 2159335 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 01:06:01.799709 2159335 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18375-1958430/.minikube/bin
	I0314 01:06:01.800129 2159335 out.go:298] Setting JSON to false
	I0314 01:06:01.801123 2159335 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":31712,"bootTime":1710346650,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0314 01:06:01.801223 2159335 start.go:139] virtualization:  
	I0314 01:06:01.804113 2159335 out.go:177] * [old-k8s-version-023742] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0314 01:06:01.806863 2159335 notify.go:220] Checking for updates...
	I0314 01:06:01.806924 2159335 out.go:177]   - MINIKUBE_LOCATION=18375
	I0314 01:06:01.809402 2159335 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0314 01:06:01.811450 2159335 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18375-1958430/kubeconfig
	I0314 01:06:01.813323 2159335 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18375-1958430/.minikube
	I0314 01:06:01.815649 2159335 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0314 01:06:01.817988 2159335 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0314 01:06:01.820381 2159335 config.go:182] Loaded profile config "old-k8s-version-023742": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0314 01:06:01.822874 2159335 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0314 01:06:01.825028 2159335 driver.go:392] Setting default libvirt URI to qemu:///system
	I0314 01:06:01.859354 2159335 docker.go:122] docker version: linux-25.0.4:Docker Engine - Community
	I0314 01:06:01.859463 2159335 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0314 01:06:01.946464 2159335 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:56 OomKillDisable:true NGoroutines:71 SystemTime:2024-03-14 01:06:01.935852922 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
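
The docker info lines here come from shelling out to docker system info --format "{{json .}}" and decoding the JSON into a struct (info.go:266). A minimal sketch of that pattern, assuming only a couple of the fields visible above:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// Subset of the fields in the docker info line above; the real struct
	// in minikube carries many more.
	type dockerInfo struct {
		NCPU         int    `json:"NCPU"`
		MemTotal     int64  `json:"MemTotal"`
		CgroupDriver string `json:"CgroupDriver"`
	}

	func main() {
		out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
		if err != nil {
			panic(err)
		}
		var info dockerInfo
		if err := json.Unmarshal(out, &info); err != nil {
			panic(err)
		}
		fmt.Printf("cpus=%d mem=%d cgroup=%s\n", info.NCPU, info.MemTotal, info.CgroupDriver)
	}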
	I0314 01:06:01.946585 2159335 docker.go:295] overlay module found
	I0314 01:06:01.949774 2159335 out.go:177] * Using the docker driver based on existing profile
	I0314 01:06:01.952901 2159335 start.go:297] selected driver: docker
	I0314 01:06:01.952929 2159335 start.go:901] validating driver "docker" against &{Name:old-k8s-version-023742 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-023742 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 01:06:01.953048 2159335 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0314 01:06:01.953645 2159335 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0314 01:06:02.084812 2159335 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:56 OomKillDisable:true NGoroutines:71 SystemTime:2024-03-14 01:06:02.070327976 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0314 01:06:02.085200 2159335 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0314 01:06:02.085247 2159335 cni.go:84] Creating CNI manager for ""
	I0314 01:06:02.085256 2159335 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0314 01:06:02.085296 2159335 start.go:340] cluster config:
	{Name:old-k8s-version-023742 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-023742 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 01:06:02.089036 2159335 out.go:177] * Starting "old-k8s-version-023742" primary control-plane node in "old-k8s-version-023742" cluster
	I0314 01:06:02.090898 2159335 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0314 01:06:02.092783 2159335 out.go:177] * Pulling base image v0.0.42-1710284843-18375 ...
	I0314 01:06:02.094898 2159335 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0314 01:06:02.094966 2159335 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18375-1958430/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0314 01:06:02.094977 2159335 cache.go:56] Caching tarball of preloaded images
	I0314 01:06:02.095078 2159335 preload.go:173] Found /home/jenkins/minikube-integration/18375-1958430/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0314 01:06:02.095087 2159335 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
	I0314 01:06:02.095249 2159335 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f in local docker daemon
	I0314 01:06:02.095492 2159335 profile.go:142] Saving config to /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/old-k8s-version-023742/config.json ...
	I0314 01:06:02.113781 2159335 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f in local docker daemon, skipping pull
	I0314 01:06:02.113804 2159335 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f exists in daemon, skipping load
	I0314 01:06:02.113828 2159335 cache.go:194] Successfully downloaded all kic artifacts
	I0314 01:06:02.113858 2159335 start.go:360] acquireMachinesLock for old-k8s-version-023742: {Name:mk32bdb07109f0a14387bd5220e4855b0638f0fe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 01:06:02.113930 2159335 start.go:364] duration metric: took 51.06µs to acquireMachinesLock for "old-k8s-version-023742"
	I0314 01:06:02.113951 2159335 start.go:96] Skipping create...Using existing machine configuration
	I0314 01:06:02.113957 2159335 fix.go:54] fixHost starting: 
	I0314 01:06:02.114230 2159335 cli_runner.go:164] Run: docker container inspect old-k8s-version-023742 --format={{.State.Status}}
	I0314 01:06:02.134769 2159335 fix.go:112] recreateIfNeeded on old-k8s-version-023742: state=Stopped err=<nil>
	W0314 01:06:02.134796 2159335 fix.go:138] unexpected machine state, will restart: <nil>
	I0314 01:06:02.137006 2159335 out.go:177] * Restarting existing docker container for "old-k8s-version-023742" ...
	I0314 01:06:02.138792 2159335 cli_runner.go:164] Run: docker start old-k8s-version-023742
	I0314 01:06:02.503330 2159335 cli_runner.go:164] Run: docker container inspect old-k8s-version-023742 --format={{.State.Status}}
	I0314 01:06:02.545218 2159335 kic.go:430] container "old-k8s-version-023742" state is running.
	I0314 01:06:02.545696 2159335 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-023742
	I0314 01:06:02.596134 2159335 profile.go:142] Saving config to /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/old-k8s-version-023742/config.json ...
	I0314 01:06:02.596388 2159335 machine.go:94] provisionDockerMachine start ...
	I0314 01:06:02.596469 2159335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-023742
	I0314 01:06:02.622109 2159335 main.go:141] libmachine: Using SSH client type: native
	I0314 01:06:02.622392 2159335 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e00] 0x3e4660 <nil>  [] 0s} 127.0.0.1 35336 <nil> <nil>}
	I0314 01:06:02.622408 2159335 main.go:141] libmachine: About to run SSH command:
	hostname
	I0314 01:06:02.623140 2159335 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0314 01:06:05.770837 2159335 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-023742
	
	I0314 01:06:05.770908 2159335 ubuntu.go:169] provisioning hostname "old-k8s-version-023742"
	I0314 01:06:05.771027 2159335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-023742
	I0314 01:06:05.796580 2159335 main.go:141] libmachine: Using SSH client type: native
	I0314 01:06:05.796934 2159335 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e00] 0x3e4660 <nil>  [] 0s} 127.0.0.1 35336 <nil> <nil>}
	I0314 01:06:05.796954 2159335 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-023742 && echo "old-k8s-version-023742" | sudo tee /etc/hostname
	I0314 01:06:05.963722 2159335 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-023742
	
	I0314 01:06:05.963955 2159335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-023742
	I0314 01:06:05.993625 2159335 main.go:141] libmachine: Using SSH client type: native
	I0314 01:06:05.993895 2159335 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e00] 0x3e4660 <nil>  [] 0s} 127.0.0.1 35336 <nil> <nil>}
	I0314 01:06:05.993919 2159335 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-023742' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-023742/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-023742' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0314 01:06:06.155508 2159335 main.go:141] libmachine: SSH cmd err, output: <nil>: 
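
The guarded script above only touches /etc/hosts when no line already names the machine. A rough Go equivalent of that idempotent check, run locally rather than over SSH and without the sudo handling:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// ensureHostsEntry appends a 127.0.1.1 mapping unless some line
	// already mentions the hostname, mirroring the grep guard above.
	func ensureHostsEntry(path, name string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		for _, line := range strings.Split(string(data), "\n") {
			if strings.Contains(line, name) {
				return nil // already mapped, nothing to do
			}
		}
		f, err := os.OpenFile(path, os.O_APPEND|os.O_WRONLY, 0644)
		if err != nil {
			return err
		}
		defer f.Close()
		_, err = fmt.Fprintf(f, "127.0.1.1 %s\n", name)
		return err
	}

	func main() {
		if err := ensureHostsEntry("/etc/hosts", "old-k8s-version-023742"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}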
	I0314 01:06:06.155537 2159335 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18375-1958430/.minikube CaCertPath:/home/jenkins/minikube-integration/18375-1958430/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18375-1958430/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18375-1958430/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18375-1958430/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18375-1958430/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18375-1958430/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18375-1958430/.minikube}
	I0314 01:06:06.155586 2159335 ubuntu.go:177] setting up certificates
	I0314 01:06:06.155597 2159335 provision.go:84] configureAuth start
	I0314 01:06:06.155675 2159335 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-023742
	I0314 01:06:06.184846 2159335 provision.go:143] copyHostCerts
	I0314 01:06:06.184916 2159335 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-1958430/.minikube/ca.pem, removing ...
	I0314 01:06:06.184936 2159335 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-1958430/.minikube/ca.pem
	I0314 01:06:06.185025 2159335 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-1958430/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18375-1958430/.minikube/ca.pem (1078 bytes)
	I0314 01:06:06.185169 2159335 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-1958430/.minikube/cert.pem, removing ...
	I0314 01:06:06.185182 2159335 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-1958430/.minikube/cert.pem
	I0314 01:06:06.185212 2159335 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-1958430/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18375-1958430/.minikube/cert.pem (1123 bytes)
	I0314 01:06:06.185277 2159335 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-1958430/.minikube/key.pem, removing ...
	I0314 01:06:06.185287 2159335 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-1958430/.minikube/key.pem
	I0314 01:06:06.185312 2159335 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-1958430/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18375-1958430/.minikube/key.pem (1675 bytes)
	I0314 01:06:06.185364 2159335 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18375-1958430/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18375-1958430/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18375-1958430/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-023742 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-023742]
	I0314 01:06:06.619178 2159335 provision.go:177] copyRemoteCerts
	I0314 01:06:06.619271 2159335 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0314 01:06:06.619320 2159335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-023742
	I0314 01:06:06.636091 2159335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35336 SSHKeyPath:/home/jenkins/minikube-integration/18375-1958430/.minikube/machines/old-k8s-version-023742/id_rsa Username:docker}
	I0314 01:06:06.745299 2159335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-1958430/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0314 01:06:06.784805 2159335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-1958430/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0314 01:06:06.822728 2159335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-1958430/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0314 01:06:06.849780 2159335 provision.go:87] duration metric: took 694.165925ms to configureAuth
	I0314 01:06:06.849809 2159335 ubuntu.go:193] setting minikube options for container-runtime
	I0314 01:06:06.850005 2159335 config.go:182] Loaded profile config "old-k8s-version-023742": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0314 01:06:06.850019 2159335 machine.go:97] duration metric: took 4.253619972s to provisionDockerMachine
	I0314 01:06:06.850029 2159335 start.go:293] postStartSetup for "old-k8s-version-023742" (driver="docker")
	I0314 01:06:06.850042 2159335 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0314 01:06:06.850102 2159335 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0314 01:06:06.850145 2159335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-023742
	I0314 01:06:06.869198 2159335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35336 SSHKeyPath:/home/jenkins/minikube-integration/18375-1958430/.minikube/machines/old-k8s-version-023742/id_rsa Username:docker}
	I0314 01:06:06.972253 2159335 ssh_runner.go:195] Run: cat /etc/os-release
	I0314 01:06:06.977455 2159335 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0314 01:06:06.977496 2159335 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0314 01:06:06.977507 2159335 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0314 01:06:06.977514 2159335 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0314 01:06:06.977524 2159335 filesync.go:126] Scanning /home/jenkins/minikube-integration/18375-1958430/.minikube/addons for local assets ...
	I0314 01:06:06.977585 2159335 filesync.go:126] Scanning /home/jenkins/minikube-integration/18375-1958430/.minikube/files for local assets ...
	I0314 01:06:06.977677 2159335 filesync.go:149] local asset: /home/jenkins/minikube-integration/18375-1958430/.minikube/files/etc/ssl/certs/19638972.pem -> 19638972.pem in /etc/ssl/certs
	I0314 01:06:06.977820 2159335 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0314 01:06:06.989836 2159335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-1958430/.minikube/files/etc/ssl/certs/19638972.pem --> /etc/ssl/certs/19638972.pem (1708 bytes)
	I0314 01:06:07.021272 2159335 start.go:296] duration metric: took 171.224297ms for postStartSetup
	I0314 01:06:07.021433 2159335 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0314 01:06:07.021516 2159335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-023742
	I0314 01:06:07.039796 2159335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35336 SSHKeyPath:/home/jenkins/minikube-integration/18375-1958430/.minikube/machines/old-k8s-version-023742/id_rsa Username:docker}
	I0314 01:06:07.136429 2159335 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0314 01:06:07.141415 2159335 fix.go:56] duration metric: took 5.027450649s for fixHost
	I0314 01:06:07.141436 2159335 start.go:83] releasing machines lock for "old-k8s-version-023742", held for 5.027497106s
	I0314 01:06:07.141510 2159335 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-023742
	I0314 01:06:07.162841 2159335 ssh_runner.go:195] Run: cat /version.json
	I0314 01:06:07.162897 2159335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-023742
	I0314 01:06:07.163141 2159335 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0314 01:06:07.163193 2159335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-023742
	I0314 01:06:07.195084 2159335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35336 SSHKeyPath:/home/jenkins/minikube-integration/18375-1958430/.minikube/machines/old-k8s-version-023742/id_rsa Username:docker}
	I0314 01:06:07.199178 2159335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35336 SSHKeyPath:/home/jenkins/minikube-integration/18375-1958430/.minikube/machines/old-k8s-version-023742/id_rsa Username:docker}
	I0314 01:06:07.413595 2159335 ssh_runner.go:195] Run: systemctl --version
	I0314 01:06:07.418364 2159335 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0314 01:06:07.422731 2159335 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0314 01:06:07.439904 2159335 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0314 01:06:07.440034 2159335 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0314 01:06:07.449059 2159335 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
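
The find/mv pipeline above sidelines bridge and podman CNI configs by renaming them with a .mk_disabled suffix (none were present here, hence "nothing to disable"). A Go sketch of the same rename-to-disable idea:

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	func main() {
		// Match the same file classes the log disables in /etc/cni/net.d.
		for _, pat := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
			matches, _ := filepath.Glob(pat)
			for _, m := range matches {
				if filepath.Ext(m) == ".mk_disabled" {
					continue // already sidelined, like find's -not -name *.mk_disabled
				}
				if err := os.Rename(m, m+".mk_disabled"); err != nil {
					fmt.Fprintln(os.Stderr, err)
				}
			}
		}
	}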
	I0314 01:06:07.449143 2159335 start.go:494] detecting cgroup driver to use...
	I0314 01:06:07.449205 2159335 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0314 01:06:07.449273 2159335 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0314 01:06:07.464009 2159335 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0314 01:06:07.478861 2159335 docker.go:217] disabling cri-docker service (if available) ...
	I0314 01:06:07.478977 2159335 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0314 01:06:07.493206 2159335 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0314 01:06:07.505967 2159335 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0314 01:06:07.643007 2159335 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0314 01:06:07.759030 2159335 docker.go:233] disabling docker service ...
	I0314 01:06:07.759173 2159335 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0314 01:06:07.773103 2159335 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0314 01:06:07.787376 2159335 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0314 01:06:07.896966 2159335 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0314 01:06:08.008490 2159335 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0314 01:06:08.022965 2159335 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0314 01:06:08.041442 2159335 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0314 01:06:08.052500 2159335 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0314 01:06:08.063596 2159335 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0314 01:06:08.063726 2159335 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0314 01:06:08.074757 2159335 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0314 01:06:08.086029 2159335 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0314 01:06:08.096912 2159335 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0314 01:06:08.107435 2159335 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0314 01:06:08.117172 2159335 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0314 01:06:08.127790 2159335 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0314 01:06:08.137654 2159335 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0314 01:06:08.147501 2159335 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 01:06:08.260026 2159335 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0314 01:06:08.451885 2159335 start.go:541] Will wait 60s for socket path /run/containerd/containerd.sock
	I0314 01:06:08.451964 2159335 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
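
The wait at start.go:541 polls for the containerd socket for up to 60s after the restart. A minimal sketch of that poll loop:

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForSocket polls until the socket path exists or the timeout lapses.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if _, err := os.Stat(path); err == nil {
				return nil // socket exists, runtime is up
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out waiting for %s", path)
	}

	func main() {
		if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}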
	I0314 01:06:08.456135 2159335 start.go:562] Will wait 60s for crictl version
	I0314 01:06:08.456216 2159335 ssh_runner.go:195] Run: which crictl
	I0314 01:06:08.459908 2159335 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0314 01:06:08.550727 2159335 start.go:578] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.28
	RuntimeApiVersion:  v1
	I0314 01:06:08.550850 2159335 ssh_runner.go:195] Run: containerd --version
	I0314 01:06:08.593090 2159335 ssh_runner.go:195] Run: containerd --version
	I0314 01:06:08.622171 2159335 out.go:177] * Preparing Kubernetes v1.20.0 on containerd 1.6.28 ...
	I0314 01:06:08.624066 2159335 cli_runner.go:164] Run: docker network inspect old-k8s-version-023742 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
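
The network inspect call above assembles a JSON document out of Go template pipelines. A trimmed sketch that fetches just the subnet and gateway the same way:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	func main() {
		// Cut-down version of the format string in the log: two fields only.
		format := `{"Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}"}`
		out, err := exec.Command("docker", "network", "inspect", "old-k8s-version-023742", "--format", format).Output()
		if err != nil {
			panic(err)
		}
		var net struct{ Subnet, Gateway string }
		if err := json.Unmarshal(out, &net); err != nil {
			panic(err)
		}
		fmt.Printf("subnet=%s gateway=%s\n", net.Subnet, net.Gateway)
	}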
	I0314 01:06:08.646920 2159335 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0314 01:06:08.651041 2159335 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 01:06:08.664009 2159335 kubeadm.go:877] updating cluster {Name:old-k8s-version-023742 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-023742 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0314 01:06:08.664142 2159335 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0314 01:06:08.664198 2159335 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 01:06:08.711683 2159335 containerd.go:612] all images are preloaded for containerd runtime.
	I0314 01:06:08.711704 2159335 containerd.go:519] Images already preloaded, skipping extraction
	I0314 01:06:08.711775 2159335 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 01:06:08.756644 2159335 containerd.go:612] all images are preloaded for containerd runtime.
	I0314 01:06:08.756712 2159335 cache_images.go:84] Images are preloaded, skipping loading
	I0314 01:06:08.756734 2159335 kubeadm.go:928] updating node { 192.168.76.2 8443 v1.20.0 containerd true true} ...
	I0314 01:06:08.756889 2159335 kubeadm.go:940] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=old-k8s-version-023742 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-023742 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0314 01:06:08.756980 2159335 ssh_runner.go:195] Run: sudo crictl info
	I0314 01:06:08.802438 2159335 cni.go:84] Creating CNI manager for ""
	I0314 01:06:08.802459 2159335 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0314 01:06:08.802469 2159335 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0314 01:06:08.802490 2159335 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-023742 NodeName:old-k8s-version-023742 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0314 01:06:08.802615 2159335 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "old-k8s-version-023742"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
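
One consistency worth noting in the config above: the kubelet's cgroupDriver (cgroupfs) must agree with the SystemdCgroup = false patched into /etc/containerd/config.toml earlier. A small sketch that decodes just that field, using gopkg.in/yaml.v3 as an assumed dependency:

	package main

	import (
		"fmt"

		"gopkg.in/yaml.v3"
	)

	// Only the field we care about from the KubeletConfiguration document.
	type kubeletConfig struct {
		CgroupDriver string `yaml:"cgroupDriver"`
	}

	func main() {
		doc := []byte("apiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\ncgroupDriver: cgroupfs\n")
		var kc kubeletConfig
		if err := yaml.Unmarshal(doc, &kc); err != nil {
			panic(err)
		}
		// containerd was patched with SystemdCgroup = false, i.e. cgroupfs.
		fmt.Println("drivers agree:", kc.CgroupDriver == "cgroupfs")
	}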
	
	I0314 01:06:08.802676 2159335 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0314 01:06:08.811650 2159335 binaries.go:44] Found k8s binaries, skipping transfer
	I0314 01:06:08.811790 2159335 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0314 01:06:08.820275 2159335 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (442 bytes)
	I0314 01:06:08.838393 2159335 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0314 01:06:08.857961 2159335 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2125 bytes)
	I0314 01:06:08.876233 2159335 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0314 01:06:08.880175 2159335 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 01:06:08.890763 2159335 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 01:06:09.012532 2159335 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 01:06:09.032945 2159335 certs.go:68] Setting up /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/old-k8s-version-023742 for IP: 192.168.76.2
	I0314 01:06:09.033013 2159335 certs.go:194] generating shared ca certs ...
	I0314 01:06:09.033043 2159335 certs.go:226] acquiring lock for ca certs: {Name:mka77573162012513ec65b9398fcff30bed9742a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 01:06:09.033244 2159335 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18375-1958430/.minikube/ca.key
	I0314 01:06:09.033335 2159335 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18375-1958430/.minikube/proxy-client-ca.key
	I0314 01:06:09.033362 2159335 certs.go:256] generating profile certs ...
	I0314 01:06:09.033502 2159335 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/old-k8s-version-023742/client.key
	I0314 01:06:09.033612 2159335 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/old-k8s-version-023742/apiserver.key.b4e86bd0
	I0314 01:06:09.033693 2159335 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/old-k8s-version-023742/proxy-client.key
	I0314 01:06:09.033866 2159335 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-1958430/.minikube/certs/1963897.pem (1338 bytes)
	W0314 01:06:09.033927 2159335 certs.go:480] ignoring /home/jenkins/minikube-integration/18375-1958430/.minikube/certs/1963897_empty.pem, impossibly tiny 0 bytes
	I0314 01:06:09.033965 2159335 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-1958430/.minikube/certs/ca-key.pem (1679 bytes)
	I0314 01:06:09.034018 2159335 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-1958430/.minikube/certs/ca.pem (1078 bytes)
	I0314 01:06:09.034076 2159335 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-1958430/.minikube/certs/cert.pem (1123 bytes)
	I0314 01:06:09.034139 2159335 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-1958430/.minikube/certs/key.pem (1675 bytes)
	I0314 01:06:09.034234 2159335 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-1958430/.minikube/files/etc/ssl/certs/19638972.pem (1708 bytes)
	I0314 01:06:09.035404 2159335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-1958430/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0314 01:06:09.062755 2159335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-1958430/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0314 01:06:09.096498 2159335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-1958430/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0314 01:06:09.133713 2159335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-1958430/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0314 01:06:09.189741 2159335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/old-k8s-version-023742/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0314 01:06:09.253391 2159335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/old-k8s-version-023742/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0314 01:06:09.306263 2159335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/old-k8s-version-023742/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0314 01:06:09.336928 2159335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/old-k8s-version-023742/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0314 01:06:09.361555 2159335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-1958430/.minikube/certs/1963897.pem --> /usr/share/ca-certificates/1963897.pem (1338 bytes)
	I0314 01:06:09.387827 2159335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-1958430/.minikube/files/etc/ssl/certs/19638972.pem --> /usr/share/ca-certificates/19638972.pem (1708 bytes)
	I0314 01:06:09.412126 2159335 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-1958430/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0314 01:06:09.436520 2159335 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0314 01:06:09.454737 2159335 ssh_runner.go:195] Run: openssl version
	I0314 01:06:09.460737 2159335 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0314 01:06:09.470310 2159335 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0314 01:06:09.474358 2159335 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 14 00:21 /usr/share/ca-certificates/minikubeCA.pem
	I0314 01:06:09.474421 2159335 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0314 01:06:09.481750 2159335 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0314 01:06:09.490826 2159335 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1963897.pem && ln -fs /usr/share/ca-certificates/1963897.pem /etc/ssl/certs/1963897.pem"
	I0314 01:06:09.500228 2159335 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1963897.pem
	I0314 01:06:09.504319 2159335 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 14 00:26 /usr/share/ca-certificates/1963897.pem
	I0314 01:06:09.504385 2159335 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1963897.pem
	I0314 01:06:09.512048 2159335 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1963897.pem /etc/ssl/certs/51391683.0"
	I0314 01:06:09.521607 2159335 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/19638972.pem && ln -fs /usr/share/ca-certificates/19638972.pem /etc/ssl/certs/19638972.pem"
	I0314 01:06:09.531032 2159335 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/19638972.pem
	I0314 01:06:09.534963 2159335 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 14 00:26 /usr/share/ca-certificates/19638972.pem
	I0314 01:06:09.535039 2159335 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/19638972.pem
	I0314 01:06:09.542589 2159335 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/19638972.pem /etc/ssl/certs/3ec20f2e.0"
	I0314 01:06:09.552196 2159335 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0314 01:06:09.556213 2159335 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0314 01:06:09.563311 2159335 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0314 01:06:09.570595 2159335 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0314 01:06:09.577698 2159335 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0314 01:06:09.584825 2159335 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0314 01:06:09.591954 2159335 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
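
The openssl x509 -checkend 86400 runs above ask whether each certificate survives the next 24 hours. The same test expressed in Go via crypto/x509, with an illustrative path:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM cert at path lapses inside d,
	// matching the semantics of openssl x509 -checkend.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("expires within 24h:", soon)
	}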
	I0314 01:06:09.599065 2159335 kubeadm.go:391] StartCluster: {Name:old-k8s-version-023742 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-023742 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 01:06:09.599177 2159335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0314 01:06:09.599248 2159335 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 01:06:09.666867 2159335 cri.go:89] found id: "7d7e6014fc3038892fc04a7ac247dec3563fd4e1a68251761f5191e67aa3d29c"
	I0314 01:06:09.666890 2159335 cri.go:89] found id: "82285fe9aaaa74c2e22e01bd81b5181d6716037e73b00d0bc15879938824a80d"
	I0314 01:06:09.666895 2159335 cri.go:89] found id: "36947fb39456b70b586f2321c1589d336fbf8121fcabf5f669409c9680ddc202"
	I0314 01:06:09.666899 2159335 cri.go:89] found id: "251c6cdd00905f46268a5714aa17f44694250fe47496912c484048b8908e5650"
	I0314 01:06:09.666903 2159335 cri.go:89] found id: "1c0e3f86261a4182fbe7a919791938f3f1b269a39c811b556df8aa8f15b310f1"
	I0314 01:06:09.666907 2159335 cri.go:89] found id: "cf476c9051e98ad0797f500808cadc3fa875ce3d5892a9524d352ac49530d1c7"
	I0314 01:06:09.666910 2159335 cri.go:89] found id: "89cecf527c72212c9774ad52dd4fdf563d5e162b2b3c0590daf1a40383e5f87c"
	I0314 01:06:09.666913 2159335 cri.go:89] found id: "00c36bf0f8362e90a4a6eb52e67b0fde8d3b9e110159c37ce5cff0a34795231b"
	I0314 01:06:09.666916 2159335 cri.go:89] found id: ""
	I0314 01:06:09.666970 2159335 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0314 01:06:09.679915 2159335 cri.go:116] JSON = null
	W0314 01:06:09.679959 2159335 kubeadm.go:398] unpause failed: list paused: list returned 0 containers, but ps returned 8
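
The warning above arises because runc list -f json prints the literal null when nothing is paused, while crictl still reports eight containers. Decoding into a nil-able slice absorbs that case; a sketch, not the cri.go implementation:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	type runcContainer struct {
		ID     string `json:"id"`
		Status string `json:"status"`
	}

	func main() {
		// `runc list -f json` emits the literal null when the list is empty.
		raw := []byte("null")

		var containers []runcContainer // a nil slice absorbs JSON null cleanly
		if err := json.Unmarshal(raw, &containers); err != nil {
			panic(err)
		}
		fmt.Printf("paused containers: %d\n", len(containers))
	}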
	I0314 01:06:09.680023 2159335 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0314 01:06:09.689554 2159335 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0314 01:06:09.689574 2159335 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0314 01:06:09.689579 2159335 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0314 01:06:09.689630 2159335 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0314 01:06:09.698273 2159335 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0314 01:06:09.698689 2159335 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-023742" does not appear in /home/jenkins/minikube-integration/18375-1958430/kubeconfig
	I0314 01:06:09.698790 2159335 kubeconfig.go:62] /home/jenkins/minikube-integration/18375-1958430/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-023742" cluster setting kubeconfig missing "old-k8s-version-023742" context setting]
	I0314 01:06:09.699077 2159335 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-1958430/kubeconfig: {Name:mkdddca847fdd161b32ac7434f6b37d491dbdecd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 01:06:09.700469 2159335 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0314 01:06:09.709101 2159335 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.76.2
	I0314 01:06:09.709143 2159335 kubeadm.go:591] duration metric: took 19.558717ms to restartPrimaryControlPlane
	I0314 01:06:09.709153 2159335 kubeadm.go:393] duration metric: took 110.096978ms to StartCluster
	I0314 01:06:09.709168 2159335 settings.go:142] acquiring lock: {Name:mkb041dc79ae1947b27d39dd7ebbd3bd473ee07d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 01:06:09.709251 2159335 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18375-1958430/kubeconfig
	I0314 01:06:09.709941 2159335 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-1958430/kubeconfig: {Name:mkdddca847fdd161b32ac7434f6b37d491dbdecd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
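
Annotation: both kubeconfig repairs above are serialized through a file lock — the lock.go:35 lines show WriteFile acquiring the kubeconfig with Delay:500ms Timeout:1m0s. A sketch of that acquire-with-delay-until-timeout pattern using a plain O_EXCL lockfile; the 500ms/1m parameters mirror the log, but the lockfile mechanism here is an assumption for illustration, not minikube's real lock package:

// lockedwrite.go - sketch of a lock-guarded WriteFile with Delay:500ms,
// Timeout:1m0s as logged above (illustrative; real implementation may differ).
package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
	"time"
)

// writeFileLocked retries creating <path>.lock every delay until timeout,
// then writes the file and releases the lock.
func writeFileLocked(path string, data []byte, delay, timeout time.Duration) error {
	lock := path + ".lock"
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(lock, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			break // lock acquired
		}
		if !errors.Is(err, fs.ErrExist) {
			return err
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out acquiring %s", lock)
		}
		time.Sleep(delay)
	}
	defer os.Remove(lock)
	return os.WriteFile(path, data, 0o600)
}

func main() {
	err := writeFileLocked("kubeconfig", []byte("# repaired cluster/context entries\n"),
		500*time.Millisecond, time.Minute)
	fmt.Println("write:", err)
}
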
	I0314 01:06:09.710388 2159335 config.go:182] Loaded profile config "old-k8s-version-023742": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0314 01:06:09.710440 2159335 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0314 01:06:09.714818 2159335 out.go:177] * Verifying Kubernetes components...
	I0314 01:06:09.710501 2159335 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0314 01:06:09.716806 2159335 addons.go:69] Setting dashboard=true in profile "old-k8s-version-023742"
	I0314 01:06:09.716864 2159335 addons.go:234] Setting addon dashboard=true in "old-k8s-version-023742"
	W0314 01:06:09.716878 2159335 addons.go:243] addon dashboard should already be in state true
	I0314 01:06:09.716933 2159335 host.go:66] Checking if "old-k8s-version-023742" exists ...
	I0314 01:06:09.717425 2159335 cli_runner.go:164] Run: docker container inspect old-k8s-version-023742 --format={{.State.Status}}
	I0314 01:06:09.717598 2159335 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-023742"
	I0314 01:06:09.717642 2159335 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-023742"
	W0314 01:06:09.717655 2159335 addons.go:243] addon storage-provisioner should already be in state true
	I0314 01:06:09.717678 2159335 host.go:66] Checking if "old-k8s-version-023742" exists ...
	I0314 01:06:09.718084 2159335 cli_runner.go:164] Run: docker container inspect old-k8s-version-023742 --format={{.State.Status}}
	I0314 01:06:09.720499 2159335 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-023742"
	I0314 01:06:09.720538 2159335 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-023742"
	I0314 01:06:09.720815 2159335 cli_runner.go:164] Run: docker container inspect old-k8s-version-023742 --format={{.State.Status}}
	I0314 01:06:09.735747 2159335 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 01:06:09.735913 2159335 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-023742"
	I0314 01:06:09.735961 2159335 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-023742"
	W0314 01:06:09.735969 2159335 addons.go:243] addon metrics-server should already be in state true
	I0314 01:06:09.736001 2159335 host.go:66] Checking if "old-k8s-version-023742" exists ...
	I0314 01:06:09.736446 2159335 cli_runner.go:164] Run: docker container inspect old-k8s-version-023742 --format={{.State.Status}}
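
Annotation: each addon enable above first verifies the machine by formatting the Docker container's state (the repeated cli_runner.go:164 "docker container inspect ... --format={{.State.Status}}" runs). The docker CLI invocation below is exactly as logged; the small Go wrapper around it is only a sketch:

// inspect_state.go - sketch of the cli_runner.go state check repeated above
// (docker flags as logged; wrapper is illustrative).
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func containerState(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", name,
		"--format", "{{.State.Status}}").Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	state, err := containerState("old-k8s-version-023742")
	if err != nil {
		panic(err)
	}
	fmt.Println("machine state:", state) // e.g. "running"
}
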
	I0314 01:06:09.798026 2159335 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 01:06:09.800206 2159335 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0314 01:06:09.803424 2159335 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0314 01:06:09.800364 2159335 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0314 01:06:09.803371 2159335 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-023742"
	W0314 01:06:09.807273 2159335 addons.go:243] addon default-storageclass should already be in state true
	I0314 01:06:09.807313 2159335 host.go:66] Checking if "old-k8s-version-023742" exists ...
	I0314 01:06:09.807749 2159335 cli_runner.go:164] Run: docker container inspect old-k8s-version-023742 --format={{.State.Status}}
	I0314 01:06:09.807924 2159335 addons.go:426] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0314 01:06:09.807942 2159335 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0314 01:06:09.807989 2159335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-023742
	I0314 01:06:09.808338 2159335 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0314 01:06:09.808400 2159335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-023742
	I0314 01:06:09.815400 2159335 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0314 01:06:09.817183 2159335 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0314 01:06:09.817205 2159335 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0314 01:06:09.817278 2159335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-023742
	I0314 01:06:09.861943 2159335 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0314 01:06:09.861964 2159335 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0314 01:06:09.862023 2159335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-023742
	I0314 01:06:09.917688 2159335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35336 SSHKeyPath:/home/jenkins/minikube-integration/18375-1958430/.minikube/machines/old-k8s-version-023742/id_rsa Username:docker}
	I0314 01:06:09.931314 2159335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35336 SSHKeyPath:/home/jenkins/minikube-integration/18375-1958430/.minikube/machines/old-k8s-version-023742/id_rsa Username:docker}
	I0314 01:06:09.933198 2159335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35336 SSHKeyPath:/home/jenkins/minikube-integration/18375-1958430/.minikube/machines/old-k8s-version-023742/id_rsa Username:docker}
	I0314 01:06:09.933604 2159335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35336 SSHKeyPath:/home/jenkins/minikube-integration/18375-1958430/.minikube/machines/old-k8s-version-023742/id_rsa Username:docker}
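
Annotation: the addon install flow above is two steps per manifest — stage the bytes onto the node ("scp memory --> /etc/kubernetes/addons/...yaml (N bytes)"), then apply them with the version-pinned kubectl under the node's KUBECONFIG. A condensed local sketch of that flow, assuming root (or a writable substitute path); the destination path, kubectl binary path, and KUBECONFIG value mirror the log, the manifest content and the rest are illustrative:

// stage_apply.go - sketch of "scp memory --> file, then kubectl apply" as seen
// in the addons.go:426 / ssh_runner.go lines (writes locally instead of over
// SSH for brevity; needs root for the /etc path, or swap in a temp dir).
package main

import (
	"os"
	"os/exec"
)

func main() {
	manifest := []byte("apiVersion: v1\nkind: Namespace\nmetadata:\n  name: kubernetes-dashboard\n")
	dst := "/etc/kubernetes/addons/dashboard-ns.yaml"
	if err := os.WriteFile(dst, manifest, 0o644); err != nil {
		panic(err)
	}
	// Apply with the pinned kubectl, exactly as the log's Run: lines do.
	cmd := exec.Command("/var/lib/minikube/binaries/v1.20.0/kubectl", "apply", "-f", dst)
	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
	out, err := cmd.CombinedOutput()
	os.Stdout.Write(out)
	if err != nil {
		panic(err)
	}
}
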
	I0314 01:06:09.966200 2159335 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 01:06:10.022781 2159335 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-023742" to be "Ready" ...
	I0314 01:06:10.119848 2159335 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0314 01:06:10.164504 2159335 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0314 01:06:10.164567 2159335 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0314 01:06:10.186957 2159335 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0314 01:06:10.187028 2159335 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0314 01:06:10.218382 2159335 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0314 01:06:10.254889 2159335 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0314 01:06:10.254966 2159335 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0314 01:06:10.270540 2159335 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0314 01:06:10.270615 2159335 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0314 01:06:10.368242 2159335 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0314 01:06:10.368336 2159335 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0314 01:06:10.372336 2159335 addons.go:426] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0314 01:06:10.372409 2159335 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	W0314 01:06:10.444459 2159335 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0314 01:06:10.444572 2159335 retry.go:31] will retry after 365.310323ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
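
Annotation: every failed apply from here on is re-queued by retry.go:31 with a jittered, growing delay ("will retry after 365ms ... 228ms ... 1.0s ..."), until the apiserver stops refusing connections on localhost:8443. A generic sketch of that retry-until-deadline loop; the backoff shape approximates the varying delays in the log and is an assumption, not minikube's retry package:

// retry sketch - the loop behind the "apply failed, will retry after ..." lines
// (jitter/growth parameters are illustrative assumptions).
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retryUntil calls fn until it succeeds or the deadline passes, sleeping a
// jittered, growing delay between attempts.
func retryUntil(timeout time.Duration, fn func() error) error {
	deadline := time.Now().Add(timeout)
	base := 200 * time.Millisecond
	for attempt := 1; ; attempt++ {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("gave up after %d attempts: %w", attempt, err)
		}
		delay := base*time.Duration(attempt) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
	}
}

func main() {
	n := 0
	_ = retryUntil(5*time.Second, func() error {
		n++
		if n < 3 {
			return fmt.Errorf("connection to the server localhost:8443 was refused")
		}
		return nil
	})
}
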
	I0314 01:06:10.461345 2159335 addons.go:426] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0314 01:06:10.461414 2159335 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0314 01:06:10.470111 2159335 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0314 01:06:10.479741 2159335 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0314 01:06:10.479819 2159335 retry.go:31] will retry after 228.204007ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0314 01:06:10.501775 2159335 addons.go:426] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0314 01:06:10.501849 2159335 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0314 01:06:10.571144 2159335 addons.go:426] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0314 01:06:10.571172 2159335 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	W0314 01:06:10.606577 2159335 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0314 01:06:10.606654 2159335 retry.go:31] will retry after 195.742533ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0314 01:06:10.618720 2159335 addons.go:426] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0314 01:06:10.618782 2159335 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0314 01:06:10.638529 2159335 addons.go:426] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0314 01:06:10.638597 2159335 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0314 01:06:10.663907 2159335 addons.go:426] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0314 01:06:10.663976 2159335 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0314 01:06:10.688671 2159335 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0314 01:06:10.708844 2159335 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0314 01:06:10.803158 2159335 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0314 01:06:10.810495 2159335 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0314 01:06:10.847048 2159335 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0314 01:06:10.847122 2159335 retry.go:31] will retry after 247.457439ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0314 01:06:10.966546 2159335 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0314 01:06:10.966619 2159335 retry.go:31] will retry after 209.400234ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0314 01:06:11.095482 2159335 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0314 01:06:11.149231 2159335 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0314 01:06:11.149374 2159335 retry.go:31] will retry after 268.525349ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0314 01:06:11.149325 2159335 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0314 01:06:11.149421 2159335 retry.go:31] will retry after 503.838277ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0314 01:06:11.176587 2159335 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0314 01:06:11.243436 2159335 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0314 01:06:11.243521 2159335 retry.go:31] will retry after 430.94327ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0314 01:06:11.324595 2159335 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0314 01:06:11.324628 2159335 retry.go:31] will retry after 578.367852ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0314 01:06:11.418832 2159335 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0314 01:06:11.509357 2159335 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0314 01:06:11.509405 2159335 retry.go:31] will retry after 374.366956ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0314 01:06:11.654250 2159335 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0314 01:06:11.675623 2159335 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0314 01:06:11.774565 2159335 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0314 01:06:11.774601 2159335 retry.go:31] will retry after 576.49582ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0314 01:06:11.843659 2159335 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0314 01:06:11.843743 2159335 retry.go:31] will retry after 420.242289ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0314 01:06:11.883998 2159335 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0314 01:06:11.903393 2159335 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0314 01:06:11.999534 2159335 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0314 01:06:11.999644 2159335 retry.go:31] will retry after 1.002966786s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0314 01:06:12.024197 2159335 node_ready.go:53] error getting node "old-k8s-version-023742": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-023742": dial tcp 192.168.76.2:8443: connect: connection refused
	W0314 01:06:12.096444 2159335 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0314 01:06:12.096531 2159335 retry.go:31] will retry after 708.245145ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0314 01:06:12.265027 2159335 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0314 01:06:12.351648 2159335 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0314 01:06:12.363024 2159335 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0314 01:06:12.363055 2159335 retry.go:31] will retry after 1.074678109s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0314 01:06:12.457564 2159335 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0314 01:06:12.457597 2159335 retry.go:31] will retry after 806.717468ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0314 01:06:12.805630 2159335 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0314 01:06:12.906935 2159335 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0314 01:06:12.907015 2159335 retry.go:31] will retry after 1.049363262s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0314 01:06:13.003381 2159335 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0314 01:06:13.105445 2159335 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0314 01:06:13.105527 2159335 retry.go:31] will retry after 1.705682476s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0314 01:06:13.264970 2159335 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0314 01:06:13.365381 2159335 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0314 01:06:13.365428 2159335 retry.go:31] will retry after 1.620220017s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0314 01:06:13.438742 2159335 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0314 01:06:13.538295 2159335 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0314 01:06:13.538336 2159335 retry.go:31] will retry after 1.072906864s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0314 01:06:13.956658 2159335 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0314 01:06:14.060795 2159335 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0314 01:06:14.060848 2159335 retry.go:31] will retry after 1.100901061s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0314 01:06:14.523404 2159335 node_ready.go:53] error getting node "old-k8s-version-023742": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-023742": dial tcp 192.168.76.2:8443: connect: connection refused
	I0314 01:06:14.611795 2159335 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0314 01:06:14.726825 2159335 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0314 01:06:14.726867 2159335 retry.go:31] will retry after 1.374191141s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0314 01:06:14.812220 2159335 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0314 01:06:14.919717 2159335 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0314 01:06:14.919757 2159335 retry.go:31] will retry after 2.193380328s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0314 01:06:14.986029 2159335 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0314 01:06:15.093939 2159335 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0314 01:06:15.094028 2159335 retry.go:31] will retry after 2.348310783s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0314 01:06:15.162173 2159335 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0314 01:06:15.276353 2159335 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0314 01:06:15.276389 2159335 retry.go:31] will retry after 3.780306152s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0314 01:06:16.102162 2159335 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0314 01:06:16.251509 2159335 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0314 01:06:16.251541 2159335 retry.go:31] will retry after 3.238099069s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0314 01:06:17.023397 2159335 node_ready.go:53] error getting node "old-k8s-version-023742": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-023742": dial tcp 192.168.76.2:8443: connect: connection refused
	I0314 01:06:17.113781 2159335 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0314 01:06:17.252979 2159335 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0314 01:06:17.253008 2159335 retry.go:31] will retry after 3.983311784s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0314 01:06:17.443340 2159335 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0314 01:06:17.575323 2159335 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0314 01:06:17.575351 2159335 retry.go:31] will retry after 1.910133483s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0314 01:06:19.057446 2159335 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0314 01:06:19.486159 2159335 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0314 01:06:19.490457 2159335 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0314 01:06:21.236889 2159335 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0314 01:06:28.953071 2159335 node_ready.go:49] node "old-k8s-version-023742" has status "Ready":"True"
	I0314 01:06:28.953095 2159335 node_ready.go:38] duration metric: took 18.930249632s for node "old-k8s-version-023742" to be "Ready" ...
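
Annotation: node_ready.go's 18.9s wait above is a poll loop against the apiserver's node object that tolerates transient errors (the connection-refused lines at 01:06:12–01:06:17) until the 6m deadline. A much-simplified sketch of that poll, assuming anonymous HTTPS access with TLS verification disabled — the real check authenticates via the kubeconfig and parses status.conditions[type=Ready]; the URL is the one from the log, everything else is illustrative:

// node_ready sketch - poll until the node reports Ready, tolerating
// connection-refused (simplified, unauthenticated; assumptions as noted above).
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"strings"
	"time"
)

func main() {
	url := "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-023742"
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err != nil {
			// e.g. "dial tcp 192.168.76.2:8443: connect: connection refused"
			fmt.Println("error getting node:", err)
			time.Sleep(2 * time.Second)
			continue
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		// Crude stand-in for inspecting status.conditions[type=Ready].
		if strings.Contains(string(body), `"type":"Ready","status":"True"`) {
			fmt.Println(`node has status "Ready":"True"`)
			return
		}
		time.Sleep(2 * time.Second)
	}
}
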
	I0314 01:06:28.953107 2159335 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 01:06:29.264257 2159335 pod_ready.go:78] waiting up to 6m0s for pod "coredns-74ff55c5b-m88gl" in "kube-system" namespace to be "Ready" ...
	I0314 01:06:29.475817 2159335 pod_ready.go:92] pod "coredns-74ff55c5b-m88gl" in "kube-system" namespace has status "Ready":"True"
	I0314 01:06:29.475903 2159335 pod_ready.go:81] duration metric: took 211.542365ms for pod "coredns-74ff55c5b-m88gl" in "kube-system" namespace to be "Ready" ...
	I0314 01:06:29.475930 2159335 pod_ready.go:78] waiting up to 6m0s for pod "etcd-old-k8s-version-023742" in "kube-system" namespace to be "Ready" ...
	I0314 01:06:29.574411 2159335 pod_ready.go:92] pod "etcd-old-k8s-version-023742" in "kube-system" namespace has status "Ready":"True"
	I0314 01:06:29.574487 2159335 pod_ready.go:81] duration metric: took 98.535422ms for pod "etcd-old-k8s-version-023742" in "kube-system" namespace to be "Ready" ...
	I0314 01:06:29.574517 2159335 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-023742" in "kube-system" namespace to be "Ready" ...
	I0314 01:06:30.624782 2159335 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (11.567292978s)
	I0314 01:06:30.946466 2159335 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (11.460262115s)
	I0314 01:06:30.946550 2159335 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (9.70962901s)
	I0314 01:06:30.946570 2159335 addons.go:470] Verifying addon metrics-server=true in "old-k8s-version-023742"
	I0314 01:06:30.946663 2159335 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (11.456172942s)
	I0314 01:06:30.949240 2159335 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-023742 addons enable metrics-server
	
	I0314 01:06:30.952160 2159335 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0314 01:06:30.954841 2159335 addons.go:505] duration metric: took 21.244334186s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I0314 01:06:31.580330 2159335 pod_ready.go:102] pod "kube-apiserver-old-k8s-version-023742" in "kube-system" namespace has status "Ready":"False"
	I0314 01:06:33.584135 2159335 pod_ready.go:102] pod "kube-apiserver-old-k8s-version-023742" in "kube-system" namespace has status "Ready":"False"
	I0314 01:06:36.081989 2159335 pod_ready.go:102] pod "kube-apiserver-old-k8s-version-023742" in "kube-system" namespace has status "Ready":"False"
	I0314 01:06:38.581474 2159335 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-023742" in "kube-system" namespace has status "Ready":"True"
	I0314 01:06:38.581498 2159335 pod_ready.go:81] duration metric: took 9.006960446s for pod "kube-apiserver-old-k8s-version-023742" in "kube-system" namespace to be "Ready" ...
	I0314 01:06:38.581510 2159335 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-023742" in "kube-system" namespace to be "Ready" ...
	I0314 01:06:40.588608 2159335 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-023742" in "kube-system" namespace has status "Ready":"False"
	I0314 01:06:42.589904 2159335 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-023742" in "kube-system" namespace has status "Ready":"False"
	I0314 01:06:45.092280 2159335 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-023742" in "kube-system" namespace has status "Ready":"False"
	I0314 01:06:47.598419 2159335 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-023742" in "kube-system" namespace has status "Ready":"False"
	I0314 01:06:50.094309 2159335 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-023742" in "kube-system" namespace has status "Ready":"False"
	I0314 01:06:52.591843 2159335 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-023742" in "kube-system" namespace has status "Ready":"False"
	I0314 01:06:55.091817 2159335 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-023742" in "kube-system" namespace has status "Ready":"False"
	I0314 01:06:57.102984 2159335 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-023742" in "kube-system" namespace has status "Ready":"False"
	I0314 01:06:59.120863 2159335 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-023742" in "kube-system" namespace has status "Ready":"False"
	I0314 01:07:01.587841 2159335 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-023742" in "kube-system" namespace has status "Ready":"False"
	I0314 01:07:03.588519 2159335 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-023742" in "kube-system" namespace has status "Ready":"False"
	I0314 01:07:06.088288 2159335 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-023742" in "kube-system" namespace has status "Ready":"False"
	I0314 01:07:08.092006 2159335 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-023742" in "kube-system" namespace has status "Ready":"False"
	I0314 01:07:10.588052 2159335 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-023742" in "kube-system" namespace has status "Ready":"False"
	I0314 01:07:12.589484 2159335 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-023742" in "kube-system" namespace has status "Ready":"False"
	I0314 01:07:15.089593 2159335 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-023742" in "kube-system" namespace has status "Ready":"False"
	I0314 01:07:17.588276 2159335 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-023742" in "kube-system" namespace has status "Ready":"False"
	I0314 01:07:19.589318 2159335 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-023742" in "kube-system" namespace has status "Ready":"False"
	I0314 01:07:22.088460 2159335 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-023742" in "kube-system" namespace has status "Ready":"False"
	I0314 01:07:24.588300 2159335 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-023742" in "kube-system" namespace has status "Ready":"False"
	I0314 01:07:26.589260 2159335 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-023742" in "kube-system" namespace has status "Ready":"False"
	I0314 01:07:29.089477 2159335 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-023742" in "kube-system" namespace has status "Ready":"False"
	I0314 01:07:31.587727 2159335 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-023742" in "kube-system" namespace has status "Ready":"False"
	I0314 01:07:33.592793 2159335 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-023742" in "kube-system" namespace has status "Ready":"False"
	I0314 01:07:36.088446 2159335 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-023742" in "kube-system" namespace has status "Ready":"False"
	I0314 01:07:38.588291 2159335 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-023742" in "kube-system" namespace has status "Ready":"False"
	I0314 01:07:41.087906 2159335 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-023742" in "kube-system" namespace has status "Ready":"False"
	I0314 01:07:43.088400 2159335 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-023742" in "kube-system" namespace has status "Ready":"False"
	I0314 01:07:44.088467 2159335 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-023742" in "kube-system" namespace has status "Ready":"True"
	I0314 01:07:44.088496 2159335 pod_ready.go:81] duration metric: took 1m5.506978254s for pod "kube-controller-manager-old-k8s-version-023742" in "kube-system" namespace to be "Ready" ...
	I0314 01:07:44.088508 2159335 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vm5pd" in "kube-system" namespace to be "Ready" ...
	I0314 01:07:44.093958 2159335 pod_ready.go:92] pod "kube-proxy-vm5pd" in "kube-system" namespace has status "Ready":"True"
	I0314 01:07:44.093985 2159335 pod_ready.go:81] duration metric: took 5.469145ms for pod "kube-proxy-vm5pd" in "kube-system" namespace to be "Ready" ...
	I0314 01:07:44.093997 2159335 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-023742" in "kube-system" namespace to be "Ready" ...
	I0314 01:07:46.100906 2159335 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-023742" in "kube-system" namespace has status "Ready":"False"
	I0314 01:07:48.601389 2159335 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-023742" in "kube-system" namespace has status "Ready":"False"
	I0314 01:07:51.099734 2159335 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-023742" in "kube-system" namespace has status "Ready":"False"
	I0314 01:07:53.101296 2159335 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-023742" in "kube-system" namespace has status "Ready":"False"
	I0314 01:07:55.101787 2159335 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-023742" in "kube-system" namespace has status "Ready":"False"
	I0314 01:07:57.600299 2159335 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-023742" in "kube-system" namespace has status "Ready":"False"
	I0314 01:07:59.106087 2159335 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-023742" in "kube-system" namespace has status "Ready":"True"
	I0314 01:07:59.106115 2159335 pod_ready.go:81] duration metric: took 15.012109393s for pod "kube-scheduler-old-k8s-version-023742" in "kube-system" namespace to be "Ready" ...
	I0314 01:07:59.106127 2159335 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace to be "Ready" ...
	I0314 01:08:01.112043 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:08:03.113156 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:08:05.613131 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:08:08.113014 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:08:10.113211 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:08:12.614105 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:08:15.113089 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:08:17.614005 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:08:20.113930 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:08:22.613294 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:08:25.112900 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:08:27.612495 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:08:30.114205 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:08:32.612283 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:08:34.612593 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:08:36.613623 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:08:39.113089 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:08:41.612451 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:08:43.613091 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:08:46.116130 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:08:48.612362 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:08:50.612447 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:08:52.612689 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:08:55.113105 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:08:57.612018 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:08:59.612746 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:09:02.113430 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:09:04.612370 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:09:07.113178 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:09:09.612196 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:09:12.113067 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:09:14.612742 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:09:17.112547 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:09:19.612698 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:09:21.612869 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:09:23.617303 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:09:26.113238 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:09:28.114666 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:09:30.612076 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:09:32.612416 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:09:34.612869 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:09:36.613188 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:09:38.613854 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:09:40.680861 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:09:43.112789 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:09:45.128449 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:09:47.611451 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:09:49.615256 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:09:52.112495 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:09:54.612531 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:09:56.612712 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:09:58.616059 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:10:01.113394 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:10:03.612567 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:10:05.612888 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:10:08.112406 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:10:10.113435 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:10:12.114104 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:10:14.613004 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:10:17.112735 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:10:19.613008 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:10:22.121318 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:10:24.612298 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:10:26.613054 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:10:29.112840 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:10:31.163820 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:10:33.611968 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:10:35.612071 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:10:38.112892 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:10:40.113241 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:10:42.116762 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:10:44.612296 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:10:47.112173 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:10:49.114394 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:10:51.611692 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:10:53.612772 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:10:56.112956 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:10:58.612462 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:11:01.113085 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:11:03.612572 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:11:06.112431 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:11:08.612641 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:11:11.112936 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:11:13.613871 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:11:16.112520 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:11:18.113134 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:11:20.114090 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:11:22.611978 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:11:24.618647 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:11:27.113224 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:11:29.611941 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:11:32.111941 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:11:34.114194 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:11:36.611543 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:11:38.624690 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:11:41.112194 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:11:43.112729 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:11:45.125051 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:11:47.612575 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:11:49.613777 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:11:52.112760 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:11:54.613992 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:11:57.113242 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:11:59.112719 2159335 pod_ready.go:81] duration metric: took 4m0.006577259s for pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace to be "Ready" ...
	E0314 01:11:59.112744 2159335 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0314 01:11:59.112753 2159335 pod_ready.go:38] duration metric: took 5m30.159634759s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
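The 4m wait on metrics-server-9975d5f86-prnsg ends in a context deadline because the pod never reports Ready; the kubelet entries gathered further down show its image pull targeting the unreachable registry fake.domain. The same readiness check can be reproduced by hand; a minimal sketch, assuming the profile's kubectl context name and the addon's k8s-app=metrics-server label (both assumptions, not taken from this log):

		# context name and k8s-app=metrics-server label are assumptions, not taken from this log
		kubectl --context old-k8s-version-023742 -n kube-system wait pod -l k8s-app=metrics-server --for=condition=Ready --timeout=4m0s

This times out the same way until the metrics-server image reference resolves.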
	I0314 01:11:59.112778 2159335 api_server.go:52] waiting for apiserver process to appear ...
	I0314 01:11:59.112807 2159335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:11:59.112872 2159335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:11:59.158140 2159335 cri.go:89] found id: "fdb44b3f1b8bfa78a5d1d74ef6b0613a584997cf3688d648110b83847d6bfbb7"
	I0314 01:11:59.158161 2159335 cri.go:89] found id: "cf476c9051e98ad0797f500808cadc3fa875ce3d5892a9524d352ac49530d1c7"
	I0314 01:11:59.158165 2159335 cri.go:89] found id: ""
	I0314 01:11:59.158172 2159335 logs.go:276] 2 containers: [fdb44b3f1b8bfa78a5d1d74ef6b0613a584997cf3688d648110b83847d6bfbb7 cf476c9051e98ad0797f500808cadc3fa875ce3d5892a9524d352ac49530d1c7]
	I0314 01:11:59.158245 2159335 ssh_runner.go:195] Run: which crictl
	I0314 01:11:59.161848 2159335 ssh_runner.go:195] Run: which crictl
	I0314 01:11:59.165307 2159335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0314 01:11:59.165397 2159335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:11:59.203039 2159335 cri.go:89] found id: "262c4ec2f8f3eb8483e1462b0685a765d10399ebc85e0a7f520af4504e9e469f"
	I0314 01:11:59.203063 2159335 cri.go:89] found id: "1c0e3f86261a4182fbe7a919791938f3f1b269a39c811b556df8aa8f15b310f1"
	I0314 01:11:59.203067 2159335 cri.go:89] found id: ""
	I0314 01:11:59.203075 2159335 logs.go:276] 2 containers: [262c4ec2f8f3eb8483e1462b0685a765d10399ebc85e0a7f520af4504e9e469f 1c0e3f86261a4182fbe7a919791938f3f1b269a39c811b556df8aa8f15b310f1]
	I0314 01:11:59.203161 2159335 ssh_runner.go:195] Run: which crictl
	I0314 01:11:59.206505 2159335 ssh_runner.go:195] Run: which crictl
	I0314 01:11:59.209749 2159335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0314 01:11:59.209832 2159335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:11:59.249575 2159335 cri.go:89] found id: "577d3087fa040f93f9110c5070c4732f3d9c28a3063dad9f5fb63d2cb714c5d1"
	I0314 01:11:59.249595 2159335 cri.go:89] found id: "7d7e6014fc3038892fc04a7ac247dec3563fd4e1a68251761f5191e67aa3d29c"
	I0314 01:11:59.249600 2159335 cri.go:89] found id: ""
	I0314 01:11:59.249607 2159335 logs.go:276] 2 containers: [577d3087fa040f93f9110c5070c4732f3d9c28a3063dad9f5fb63d2cb714c5d1 7d7e6014fc3038892fc04a7ac247dec3563fd4e1a68251761f5191e67aa3d29c]
	I0314 01:11:59.249661 2159335 ssh_runner.go:195] Run: which crictl
	I0314 01:11:59.253180 2159335 ssh_runner.go:195] Run: which crictl
	I0314 01:11:59.256325 2159335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:11:59.256395 2159335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:11:59.293600 2159335 cri.go:89] found id: "abe11e046f972c9c239dc5503ce7b4e6abdb261d19262a88ebf1157e26888704"
	I0314 01:11:59.293620 2159335 cri.go:89] found id: "89cecf527c72212c9774ad52dd4fdf563d5e162b2b3c0590daf1a40383e5f87c"
	I0314 01:11:59.293625 2159335 cri.go:89] found id: ""
	I0314 01:11:59.293632 2159335 logs.go:276] 2 containers: [abe11e046f972c9c239dc5503ce7b4e6abdb261d19262a88ebf1157e26888704 89cecf527c72212c9774ad52dd4fdf563d5e162b2b3c0590daf1a40383e5f87c]
	I0314 01:11:59.293691 2159335 ssh_runner.go:195] Run: which crictl
	I0314 01:11:59.297072 2159335 ssh_runner.go:195] Run: which crictl
	I0314 01:11:59.300603 2159335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:11:59.300711 2159335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:11:59.343506 2159335 cri.go:89] found id: "a649cec0099ad8c4f2f59cb4643ea18e343945d9edac27d0c1542055b2ce681c"
	I0314 01:11:59.343573 2159335 cri.go:89] found id: "36947fb39456b70b586f2321c1589d336fbf8121fcabf5f669409c9680ddc202"
	I0314 01:11:59.343585 2159335 cri.go:89] found id: ""
	I0314 01:11:59.343593 2159335 logs.go:276] 2 containers: [a649cec0099ad8c4f2f59cb4643ea18e343945d9edac27d0c1542055b2ce681c 36947fb39456b70b586f2321c1589d336fbf8121fcabf5f669409c9680ddc202]
	I0314 01:11:59.343650 2159335 ssh_runner.go:195] Run: which crictl
	I0314 01:11:59.347304 2159335 ssh_runner.go:195] Run: which crictl
	I0314 01:11:59.350557 2159335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:11:59.350629 2159335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:11:59.391928 2159335 cri.go:89] found id: "c79d313c53d51ee00f93e8076ad26cc072bc1abea5f1a5b6a64db439ffbbd272"
	I0314 01:11:59.391991 2159335 cri.go:89] found id: "00c36bf0f8362e90a4a6eb52e67b0fde8d3b9e110159c37ce5cff0a34795231b"
	I0314 01:11:59.392003 2159335 cri.go:89] found id: ""
	I0314 01:11:59.392011 2159335 logs.go:276] 2 containers: [c79d313c53d51ee00f93e8076ad26cc072bc1abea5f1a5b6a64db439ffbbd272 00c36bf0f8362e90a4a6eb52e67b0fde8d3b9e110159c37ce5cff0a34795231b]
	I0314 01:11:59.392078 2159335 ssh_runner.go:195] Run: which crictl
	I0314 01:11:59.395688 2159335 ssh_runner.go:195] Run: which crictl
	I0314 01:11:59.399092 2159335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0314 01:11:59.399186 2159335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:11:59.437453 2159335 cri.go:89] found id: "661201637b62150c5e65fb761af09186766a8c5ee343b157bd78587674493702"
	I0314 01:11:59.437476 2159335 cri.go:89] found id: "251c6cdd00905f46268a5714aa17f44694250fe47496912c484048b8908e5650"
	I0314 01:11:59.437481 2159335 cri.go:89] found id: ""
	I0314 01:11:59.437488 2159335 logs.go:276] 2 containers: [661201637b62150c5e65fb761af09186766a8c5ee343b157bd78587674493702 251c6cdd00905f46268a5714aa17f44694250fe47496912c484048b8908e5650]
	I0314 01:11:59.437547 2159335 ssh_runner.go:195] Run: which crictl
	I0314 01:11:59.441203 2159335 ssh_runner.go:195] Run: which crictl
	I0314 01:11:59.444733 2159335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:11:59.444868 2159335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:11:59.483655 2159335 cri.go:89] found id: "c83ee544525cae8ae275b356afc7ab3e13354a5bfdc13e46d3bb967974cc4fd4"
	I0314 01:11:59.483715 2159335 cri.go:89] found id: ""
	I0314 01:11:59.483747 2159335 logs.go:276] 1 containers: [c83ee544525cae8ae275b356afc7ab3e13354a5bfdc13e46d3bb967974cc4fd4]
	I0314 01:11:59.483834 2159335 ssh_runner.go:195] Run: which crictl
	I0314 01:11:59.487513 2159335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0314 01:11:59.487586 2159335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0314 01:11:59.527664 2159335 cri.go:89] found id: "0378f91f5f30bc555d7408bc86dd626b1d5ffeb38145d3ee69160c360c8f3416"
	I0314 01:11:59.527685 2159335 cri.go:89] found id: "4e7dc51c48baacd42a05abe380200f2d6c242433155f704a890e45913c11586c"
	I0314 01:11:59.527690 2159335 cri.go:89] found id: ""
	I0314 01:11:59.527697 2159335 logs.go:276] 2 containers: [0378f91f5f30bc555d7408bc86dd626b1d5ffeb38145d3ee69160c360c8f3416 4e7dc51c48baacd42a05abe380200f2d6c242433155f704a890e45913c11586c]
	I0314 01:11:59.527760 2159335 ssh_runner.go:195] Run: which crictl
	I0314 01:11:59.531427 2159335 ssh_runner.go:195] Run: which crictl
	I0314 01:11:59.534848 2159335 logs.go:123] Gathering logs for coredns [7d7e6014fc3038892fc04a7ac247dec3563fd4e1a68251761f5191e67aa3d29c] ...
	I0314 01:11:59.534873 2159335 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7d7e6014fc3038892fc04a7ac247dec3563fd4e1a68251761f5191e67aa3d29c"
	I0314 01:11:59.580402 2159335 logs.go:123] Gathering logs for kube-proxy [a649cec0099ad8c4f2f59cb4643ea18e343945d9edac27d0c1542055b2ce681c] ...
	I0314 01:11:59.580430 2159335 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a649cec0099ad8c4f2f59cb4643ea18e343945d9edac27d0c1542055b2ce681c"
	I0314 01:11:59.621026 2159335 logs.go:123] Gathering logs for kindnet [251c6cdd00905f46268a5714aa17f44694250fe47496912c484048b8908e5650] ...
	I0314 01:11:59.621056 2159335 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 251c6cdd00905f46268a5714aa17f44694250fe47496912c484048b8908e5650"
	I0314 01:11:59.659038 2159335 logs.go:123] Gathering logs for kubernetes-dashboard [c83ee544525cae8ae275b356afc7ab3e13354a5bfdc13e46d3bb967974cc4fd4] ...
	I0314 01:11:59.659065 2159335 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c83ee544525cae8ae275b356afc7ab3e13354a5bfdc13e46d3bb967974cc4fd4"
	I0314 01:11:59.701741 2159335 logs.go:123] Gathering logs for containerd ...
	I0314 01:11:59.701769 2159335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0314 01:11:59.765300 2159335 logs.go:123] Gathering logs for dmesg ...
	I0314 01:11:59.765335 2159335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:11:59.790417 2159335 logs.go:123] Gathering logs for etcd [262c4ec2f8f3eb8483e1462b0685a765d10399ebc85e0a7f520af4504e9e469f] ...
	I0314 01:11:59.790445 2159335 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 262c4ec2f8f3eb8483e1462b0685a765d10399ebc85e0a7f520af4504e9e469f"
	I0314 01:11:59.843121 2159335 logs.go:123] Gathering logs for coredns [577d3087fa040f93f9110c5070c4732f3d9c28a3063dad9f5fb63d2cb714c5d1] ...
	I0314 01:11:59.843149 2159335 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 577d3087fa040f93f9110c5070c4732f3d9c28a3063dad9f5fb63d2cb714c5d1"
	I0314 01:11:59.902029 2159335 logs.go:123] Gathering logs for container status ...
	I0314 01:11:59.902056 2159335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:11:59.992718 2159335 logs.go:123] Gathering logs for kubelet ...
	I0314 01:11:59.992749 2159335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0314 01:12:00.060282 2159335 logs.go:138] Found kubelet problem: Mar 14 01:06:28 old-k8s-version-023742 kubelet[662]: E0314 01:06:28.974732     662 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-023742" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-023742' and this object
	W0314 01:12:00.060539 2159335 logs.go:138] Found kubelet problem: Mar 14 01:06:28 old-k8s-version-023742 kubelet[662]: E0314 01:06:28.974970     662 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-023742" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-023742' and this object
	W0314 01:12:00.060774 2159335 logs.go:138] Found kubelet problem: Mar 14 01:06:28 old-k8s-version-023742 kubelet[662]: E0314 01:06:28.975144     662 reflector.go:138] object-"kube-system"/"coredns-token-25l2w": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-25l2w" is forbidden: User "system:node:old-k8s-version-023742" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-023742' and this object
	W0314 01:12:00.061038 2159335 logs.go:138] Found kubelet problem: Mar 14 01:06:28 old-k8s-version-023742 kubelet[662]: E0314 01:06:28.975368     662 reflector.go:138] object-"kube-system"/"kindnet-token-5q9tx": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-5q9tx" is forbidden: User "system:node:old-k8s-version-023742" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-023742' and this object
	W0314 01:12:00.061280 2159335 logs.go:138] Found kubelet problem: Mar 14 01:06:28 old-k8s-version-023742 kubelet[662]: E0314 01:06:28.975944     662 reflector.go:138] object-"kube-system"/"storage-provisioner-token-hcjsf": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-hcjsf" is forbidden: User "system:node:old-k8s-version-023742" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-023742' and this object
	W0314 01:12:00.061494 2159335 logs.go:138] Found kubelet problem: Mar 14 01:06:28 old-k8s-version-023742 kubelet[662]: E0314 01:06:28.979394     662 reflector.go:138] object-"default"/"default-token-rpt2n": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-rpt2n" is forbidden: User "system:node:old-k8s-version-023742" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-023742' and this object
	W0314 01:12:00.061719 2159335 logs.go:138] Found kubelet problem: Mar 14 01:06:28 old-k8s-version-023742 kubelet[662]: E0314 01:06:28.980058     662 reflector.go:138] object-"kube-system"/"metrics-server-token-zxqtp": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-zxqtp" is forbidden: User "system:node:old-k8s-version-023742" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-023742' and this object
	W0314 01:12:00.061941 2159335 logs.go:138] Found kubelet problem: Mar 14 01:06:28 old-k8s-version-023742 kubelet[662]: E0314 01:06:28.980106     662 reflector.go:138] object-"kube-system"/"kube-proxy-token-g2rnq": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-g2rnq" is forbidden: User "system:node:old-k8s-version-023742" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-023742' and this object
	W0314 01:12:00.070827 2159335 logs.go:138] Found kubelet problem: Mar 14 01:06:31 old-k8s-version-023742 kubelet[662]: E0314 01:06:31.370582     662 pod_workers.go:191] Error syncing pod 320a7d9f-62a0-45d2-b07d-657dc3bd1b28 ("metrics-server-9975d5f86-prnsg_kube-system(320a7d9f-62a0-45d2-b07d-657dc3bd1b28)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0314 01:12:00.073041 2159335 logs.go:138] Found kubelet problem: Mar 14 01:06:32 old-k8s-version-023742 kubelet[662]: E0314 01:06:32.031743     662 pod_workers.go:191] Error syncing pod 320a7d9f-62a0-45d2-b07d-657dc3bd1b28 ("metrics-server-9975d5f86-prnsg_kube-system(320a7d9f-62a0-45d2-b07d-657dc3bd1b28)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0314 01:12:00.075814 2159335 logs.go:138] Found kubelet problem: Mar 14 01:06:46 old-k8s-version-023742 kubelet[662]: E0314 01:06:46.773820     662 pod_workers.go:191] Error syncing pod 320a7d9f-62a0-45d2-b07d-657dc3bd1b28 ("metrics-server-9975d5f86-prnsg_kube-system(320a7d9f-62a0-45d2-b07d-657dc3bd1b28)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0314 01:12:00.076572 2159335 logs.go:138] Found kubelet problem: Mar 14 01:06:48 old-k8s-version-023742 kubelet[662]: E0314 01:06:48.034705     662 reflector.go:138] object-"kubernetes-dashboard"/"kubernetes-dashboard-token-wht5d": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kubernetes-dashboard-token-wht5d" is forbidden: User "system:node:old-k8s-version-023742" cannot list resource "secrets" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'old-k8s-version-023742' and this object
	W0314 01:12:00.078439 2159335 logs.go:138] Found kubelet problem: Mar 14 01:06:59 old-k8s-version-023742 kubelet[662]: E0314 01:06:59.151608     662 pod_workers.go:191] Error syncing pod 1e66610b-54fb-4a50-b7c1-47326c1c1c70 ("dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"
	W0314 01:12:00.078625 2159335 logs.go:138] Found kubelet problem: Mar 14 01:06:59 old-k8s-version-023742 kubelet[662]: E0314 01:06:59.766765     662 pod_workers.go:191] Error syncing pod 320a7d9f-62a0-45d2-b07d-657dc3bd1b28 ("metrics-server-9975d5f86-prnsg_kube-system(320a7d9f-62a0-45d2-b07d-657dc3bd1b28)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0314 01:12:00.078953 2159335 logs.go:138] Found kubelet problem: Mar 14 01:07:00 old-k8s-version-023742 kubelet[662]: E0314 01:07:00.185170     662 pod_workers.go:191] Error syncing pod 1e66610b-54fb-4a50-b7c1-47326c1c1c70 ("dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"
	W0314 01:12:00.079500 2159335 logs.go:138] Found kubelet problem: Mar 14 01:07:02 old-k8s-version-023742 kubelet[662]: E0314 01:07:02.195749     662 pod_workers.go:191] Error syncing pod 57373aa5-20c2-4873-b3e0-c1c27570f447 ("storage-provisioner_kube-system(57373aa5-20c2-4873-b3e0-c1c27570f447)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(57373aa5-20c2-4873-b3e0-c1c27570f447)"
	W0314 01:12:00.079839 2159335 logs.go:138] Found kubelet problem: Mar 14 01:07:03 old-k8s-version-023742 kubelet[662]: E0314 01:07:03.347757     662 pod_workers.go:191] Error syncing pod 1e66610b-54fb-4a50-b7c1-47326c1c1c70 ("dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"
	W0314 01:12:00.082701 2159335 logs.go:138] Found kubelet problem: Mar 14 01:07:14 old-k8s-version-023742 kubelet[662]: E0314 01:07:14.815543     662 pod_workers.go:191] Error syncing pod 320a7d9f-62a0-45d2-b07d-657dc3bd1b28 ("metrics-server-9975d5f86-prnsg_kube-system(320a7d9f-62a0-45d2-b07d-657dc3bd1b28)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0314 01:12:00.083162 2159335 logs.go:138] Found kubelet problem: Mar 14 01:07:15 old-k8s-version-023742 kubelet[662]: E0314 01:07:15.222109     662 pod_workers.go:191] Error syncing pod 1e66610b-54fb-4a50-b7c1-47326c1c1c70 ("dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"
	W0314 01:12:00.083775 2159335 logs.go:138] Found kubelet problem: Mar 14 01:07:23 old-k8s-version-023742 kubelet[662]: E0314 01:07:23.347695     662 pod_workers.go:191] Error syncing pod 1e66610b-54fb-4a50-b7c1-47326c1c1c70 ("dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"
	W0314 01:12:00.083984 2159335 logs.go:138] Found kubelet problem: Mar 14 01:07:27 old-k8s-version-023742 kubelet[662]: E0314 01:07:27.771127     662 pod_workers.go:191] Error syncing pod 320a7d9f-62a0-45d2-b07d-657dc3bd1b28 ("metrics-server-9975d5f86-prnsg_kube-system(320a7d9f-62a0-45d2-b07d-657dc3bd1b28)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0314 01:12:00.084571 2159335 logs.go:138] Found kubelet problem: Mar 14 01:07:36 old-k8s-version-023742 kubelet[662]: E0314 01:07:36.268490     662 pod_workers.go:191] Error syncing pod 1e66610b-54fb-4a50-b7c1-47326c1c1c70 ("dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"
	W0314 01:12:00.084759 2159335 logs.go:138] Found kubelet problem: Mar 14 01:07:38 old-k8s-version-023742 kubelet[662]: E0314 01:07:38.766381     662 pod_workers.go:191] Error syncing pod 320a7d9f-62a0-45d2-b07d-657dc3bd1b28 ("metrics-server-9975d5f86-prnsg_kube-system(320a7d9f-62a0-45d2-b07d-657dc3bd1b28)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0314 01:12:00.085087 2159335 logs.go:138] Found kubelet problem: Mar 14 01:07:43 old-k8s-version-023742 kubelet[662]: E0314 01:07:43.347263     662 pod_workers.go:191] Error syncing pod 1e66610b-54fb-4a50-b7c1-47326c1c1c70 ("dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"
	W0314 01:12:00.085270 2159335 logs.go:138] Found kubelet problem: Mar 14 01:07:52 old-k8s-version-023742 kubelet[662]: E0314 01:07:52.766485     662 pod_workers.go:191] Error syncing pod 320a7d9f-62a0-45d2-b07d-657dc3bd1b28 ("metrics-server-9975d5f86-prnsg_kube-system(320a7d9f-62a0-45d2-b07d-657dc3bd1b28)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0314 01:12:00.085601 2159335 logs.go:138] Found kubelet problem: Mar 14 01:07:56 old-k8s-version-023742 kubelet[662]: E0314 01:07:56.766099     662 pod_workers.go:191] Error syncing pod 1e66610b-54fb-4a50-b7c1-47326c1c1c70 ("dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"
	W0314 01:12:00.085925 2159335 logs.go:138] Found kubelet problem: Mar 14 01:08:07 old-k8s-version-023742 kubelet[662]: E0314 01:08:07.770729     662 pod_workers.go:191] Error syncing pod 1e66610b-54fb-4a50-b7c1-47326c1c1c70 ("dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"
	W0314 01:12:00.088552 2159335 logs.go:138] Found kubelet problem: Mar 14 01:08:07 old-k8s-version-023742 kubelet[662]: E0314 01:08:07.790467     662 pod_workers.go:191] Error syncing pod 320a7d9f-62a0-45d2-b07d-657dc3bd1b28 ("metrics-server-9975d5f86-prnsg_kube-system(320a7d9f-62a0-45d2-b07d-657dc3bd1b28)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0314 01:12:00.089148 2159335 logs.go:138] Found kubelet problem: Mar 14 01:08:20 old-k8s-version-023742 kubelet[662]: E0314 01:08:20.365151     662 pod_workers.go:191] Error syncing pod 1e66610b-54fb-4a50-b7c1-47326c1c1c70 ("dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"
	W0314 01:12:00.089334 2159335 logs.go:138] Found kubelet problem: Mar 14 01:08:20 old-k8s-version-023742 kubelet[662]: E0314 01:08:20.769752     662 pod_workers.go:191] Error syncing pod 320a7d9f-62a0-45d2-b07d-657dc3bd1b28 ("metrics-server-9975d5f86-prnsg_kube-system(320a7d9f-62a0-45d2-b07d-657dc3bd1b28)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0314 01:12:00.089658 2159335 logs.go:138] Found kubelet problem: Mar 14 01:08:23 old-k8s-version-023742 kubelet[662]: E0314 01:08:23.347539     662 pod_workers.go:191] Error syncing pod 1e66610b-54fb-4a50-b7c1-47326c1c1c70 ("dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"
	W0314 01:12:00.089844 2159335 logs.go:138] Found kubelet problem: Mar 14 01:08:32 old-k8s-version-023742 kubelet[662]: E0314 01:08:32.767876     662 pod_workers.go:191] Error syncing pod 320a7d9f-62a0-45d2-b07d-657dc3bd1b28 ("metrics-server-9975d5f86-prnsg_kube-system(320a7d9f-62a0-45d2-b07d-657dc3bd1b28)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0314 01:12:00.090188 2159335 logs.go:138] Found kubelet problem: Mar 14 01:08:35 old-k8s-version-023742 kubelet[662]: E0314 01:08:35.771384     662 pod_workers.go:191] Error syncing pod 1e66610b-54fb-4a50-b7c1-47326c1c1c70 ("dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"
	W0314 01:12:00.090369 2159335 logs.go:138] Found kubelet problem: Mar 14 01:08:43 old-k8s-version-023742 kubelet[662]: E0314 01:08:43.766453     662 pod_workers.go:191] Error syncing pod 320a7d9f-62a0-45d2-b07d-657dc3bd1b28 ("metrics-server-9975d5f86-prnsg_kube-system(320a7d9f-62a0-45d2-b07d-657dc3bd1b28)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0314 01:12:00.090695 2159335 logs.go:138] Found kubelet problem: Mar 14 01:08:49 old-k8s-version-023742 kubelet[662]: E0314 01:08:49.766428     662 pod_workers.go:191] Error syncing pod 1e66610b-54fb-4a50-b7c1-47326c1c1c70 ("dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"
	W0314 01:12:00.090874 2159335 logs.go:138] Found kubelet problem: Mar 14 01:08:58 old-k8s-version-023742 kubelet[662]: E0314 01:08:58.766502     662 pod_workers.go:191] Error syncing pod 320a7d9f-62a0-45d2-b07d-657dc3bd1b28 ("metrics-server-9975d5f86-prnsg_kube-system(320a7d9f-62a0-45d2-b07d-657dc3bd1b28)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0314 01:12:00.091194 2159335 logs.go:138] Found kubelet problem: Mar 14 01:09:02 old-k8s-version-023742 kubelet[662]: E0314 01:09:02.765978     662 pod_workers.go:191] Error syncing pod 1e66610b-54fb-4a50-b7c1-47326c1c1c70 ("dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"
	W0314 01:12:00.091387 2159335 logs.go:138] Found kubelet problem: Mar 14 01:09:12 old-k8s-version-023742 kubelet[662]: E0314 01:09:12.766341     662 pod_workers.go:191] Error syncing pod 320a7d9f-62a0-45d2-b07d-657dc3bd1b28 ("metrics-server-9975d5f86-prnsg_kube-system(320a7d9f-62a0-45d2-b07d-657dc3bd1b28)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0314 01:12:00.091707 2159335 logs.go:138] Found kubelet problem: Mar 14 01:09:16 old-k8s-version-023742 kubelet[662]: E0314 01:09:16.766009     662 pod_workers.go:191] Error syncing pod 1e66610b-54fb-4a50-b7c1-47326c1c1c70 ("dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"
	W0314 01:12:00.091896 2159335 logs.go:138] Found kubelet problem: Mar 14 01:09:25 old-k8s-version-023742 kubelet[662]: E0314 01:09:25.766322     662 pod_workers.go:191] Error syncing pod 320a7d9f-62a0-45d2-b07d-657dc3bd1b28 ("metrics-server-9975d5f86-prnsg_kube-system(320a7d9f-62a0-45d2-b07d-657dc3bd1b28)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0314 01:12:00.092219 2159335 logs.go:138] Found kubelet problem: Mar 14 01:09:29 old-k8s-version-023742 kubelet[662]: E0314 01:09:29.766880     662 pod_workers.go:191] Error syncing pod 1e66610b-54fb-4a50-b7c1-47326c1c1c70 ("dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"
	W0314 01:12:00.094659 2159335 logs.go:138] Found kubelet problem: Mar 14 01:09:36 old-k8s-version-023742 kubelet[662]: E0314 01:09:36.773716     662 pod_workers.go:191] Error syncing pod 320a7d9f-62a0-45d2-b07d-657dc3bd1b28 ("metrics-server-9975d5f86-prnsg_kube-system(320a7d9f-62a0-45d2-b07d-657dc3bd1b28)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0314 01:12:00.095254 2159335 logs.go:138] Found kubelet problem: Mar 14 01:09:41 old-k8s-version-023742 kubelet[662]: E0314 01:09:41.548782     662 pod_workers.go:191] Error syncing pod 1e66610b-54fb-4a50-b7c1-47326c1c1c70 ("dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"
	W0314 01:12:00.095595 2159335 logs.go:138] Found kubelet problem: Mar 14 01:09:43 old-k8s-version-023742 kubelet[662]: E0314 01:09:43.347327     662 pod_workers.go:191] Error syncing pod 1e66610b-54fb-4a50-b7c1-47326c1c1c70 ("dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"
	W0314 01:12:00.095785 2159335 logs.go:138] Found kubelet problem: Mar 14 01:09:50 old-k8s-version-023742 kubelet[662]: E0314 01:09:50.766966     662 pod_workers.go:191] Error syncing pod 320a7d9f-62a0-45d2-b07d-657dc3bd1b28 ("metrics-server-9975d5f86-prnsg_kube-system(320a7d9f-62a0-45d2-b07d-657dc3bd1b28)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0314 01:12:00.096113 2159335 logs.go:138] Found kubelet problem: Mar 14 01:09:56 old-k8s-version-023742 kubelet[662]: E0314 01:09:56.766015     662 pod_workers.go:191] Error syncing pod 1e66610b-54fb-4a50-b7c1-47326c1c1c70 ("dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"
	W0314 01:12:00.096296 2159335 logs.go:138] Found kubelet problem: Mar 14 01:10:04 old-k8s-version-023742 kubelet[662]: E0314 01:10:04.766379     662 pod_workers.go:191] Error syncing pod 320a7d9f-62a0-45d2-b07d-657dc3bd1b28 ("metrics-server-9975d5f86-prnsg_kube-system(320a7d9f-62a0-45d2-b07d-657dc3bd1b28)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0314 01:12:00.096625 2159335 logs.go:138] Found kubelet problem: Mar 14 01:10:10 old-k8s-version-023742 kubelet[662]: E0314 01:10:10.766119     662 pod_workers.go:191] Error syncing pod 1e66610b-54fb-4a50-b7c1-47326c1c1c70 ("dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"
	W0314 01:12:00.096809 2159335 logs.go:138] Found kubelet problem: Mar 14 01:10:15 old-k8s-version-023742 kubelet[662]: E0314 01:10:15.767425     662 pod_workers.go:191] Error syncing pod 320a7d9f-62a0-45d2-b07d-657dc3bd1b28 ("metrics-server-9975d5f86-prnsg_kube-system(320a7d9f-62a0-45d2-b07d-657dc3bd1b28)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0314 01:12:00.097131 2159335 logs.go:138] Found kubelet problem: Mar 14 01:10:25 old-k8s-version-023742 kubelet[662]: E0314 01:10:25.766011     662 pod_workers.go:191] Error syncing pod 1e66610b-54fb-4a50-b7c1-47326c1c1c70 ("dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"
	W0314 01:12:00.097313 2159335 logs.go:138] Found kubelet problem: Mar 14 01:10:29 old-k8s-version-023742 kubelet[662]: E0314 01:10:29.766425     662 pod_workers.go:191] Error syncing pod 320a7d9f-62a0-45d2-b07d-657dc3bd1b28 ("metrics-server-9975d5f86-prnsg_kube-system(320a7d9f-62a0-45d2-b07d-657dc3bd1b28)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0314 01:12:00.097636 2159335 logs.go:138] Found kubelet problem: Mar 14 01:10:40 old-k8s-version-023742 kubelet[662]: E0314 01:10:40.766016     662 pod_workers.go:191] Error syncing pod 1e66610b-54fb-4a50-b7c1-47326c1c1c70 ("dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"
	W0314 01:12:00.097817 2159335 logs.go:138] Found kubelet problem: Mar 14 01:10:43 old-k8s-version-023742 kubelet[662]: E0314 01:10:43.767405     662 pod_workers.go:191] Error syncing pod 320a7d9f-62a0-45d2-b07d-657dc3bd1b28 ("metrics-server-9975d5f86-prnsg_kube-system(320a7d9f-62a0-45d2-b07d-657dc3bd1b28)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0314 01:12:00.098144 2159335 logs.go:138] Found kubelet problem: Mar 14 01:10:54 old-k8s-version-023742 kubelet[662]: E0314 01:10:54.765962     662 pod_workers.go:191] Error syncing pod 1e66610b-54fb-4a50-b7c1-47326c1c1c70 ("dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"
	W0314 01:12:00.098344 2159335 logs.go:138] Found kubelet problem: Mar 14 01:10:55 old-k8s-version-023742 kubelet[662]: E0314 01:10:55.766370     662 pod_workers.go:191] Error syncing pod 320a7d9f-62a0-45d2-b07d-657dc3bd1b28 ("metrics-server-9975d5f86-prnsg_kube-system(320a7d9f-62a0-45d2-b07d-657dc3bd1b28)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0314 01:12:00.098533 2159335 logs.go:138] Found kubelet problem: Mar 14 01:11:06 old-k8s-version-023742 kubelet[662]: E0314 01:11:06.766830     662 pod_workers.go:191] Error syncing pod 320a7d9f-62a0-45d2-b07d-657dc3bd1b28 ("metrics-server-9975d5f86-prnsg_kube-system(320a7d9f-62a0-45d2-b07d-657dc3bd1b28)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0314 01:12:00.098861 2159335 logs.go:138] Found kubelet problem: Mar 14 01:11:08 old-k8s-version-023742 kubelet[662]: E0314 01:11:08.765992     662 pod_workers.go:191] Error syncing pod 1e66610b-54fb-4a50-b7c1-47326c1c1c70 ("dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"
	W0314 01:12:00.099043 2159335 logs.go:138] Found kubelet problem: Mar 14 01:11:19 old-k8s-version-023742 kubelet[662]: E0314 01:11:19.766986     662 pod_workers.go:191] Error syncing pod 320a7d9f-62a0-45d2-b07d-657dc3bd1b28 ("metrics-server-9975d5f86-prnsg_kube-system(320a7d9f-62a0-45d2-b07d-657dc3bd1b28)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0314 01:12:00.099479 2159335 logs.go:138] Found kubelet problem: Mar 14 01:11:19 old-k8s-version-023742 kubelet[662]: E0314 01:11:19.771307     662 pod_workers.go:191] Error syncing pod 1e66610b-54fb-4a50-b7c1-47326c1c1c70 ("dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"
	W0314 01:12:00.099671 2159335 logs.go:138] Found kubelet problem: Mar 14 01:11:31 old-k8s-version-023742 kubelet[662]: E0314 01:11:31.766411     662 pod_workers.go:191] Error syncing pod 320a7d9f-62a0-45d2-b07d-657dc3bd1b28 ("metrics-server-9975d5f86-prnsg_kube-system(320a7d9f-62a0-45d2-b07d-657dc3bd1b28)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0314 01:12:00.100002 2159335 logs.go:138] Found kubelet problem: Mar 14 01:11:31 old-k8s-version-023742 kubelet[662]: E0314 01:11:31.768210     662 pod_workers.go:191] Error syncing pod 1e66610b-54fb-4a50-b7c1-47326c1c1c70 ("dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"
	W0314 01:12:00.100332 2159335 logs.go:138] Found kubelet problem: Mar 14 01:11:42 old-k8s-version-023742 kubelet[662]: E0314 01:11:42.766039     662 pod_workers.go:191] Error syncing pod 1e66610b-54fb-4a50-b7c1-47326c1c1c70 ("dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"
	W0314 01:12:00.100526 2159335 logs.go:138] Found kubelet problem: Mar 14 01:11:45 old-k8s-version-023742 kubelet[662]: E0314 01:11:45.766829     662 pod_workers.go:191] Error syncing pod 320a7d9f-62a0-45d2-b07d-657dc3bd1b28 ("metrics-server-9975d5f86-prnsg_kube-system(320a7d9f-62a0-45d2-b07d-657dc3bd1b28)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0314 01:12:00.100851 2159335 logs.go:138] Found kubelet problem: Mar 14 01:11:57 old-k8s-version-023742 kubelet[662]: E0314 01:11:57.767790     662 pod_workers.go:191] Error syncing pod 1e66610b-54fb-4a50-b7c1-47326c1c1c70 ("dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"
	W0314 01:12:00.101036 2159335 logs.go:138] Found kubelet problem: Mar 14 01:11:59 old-k8s-version-023742 kubelet[662]: E0314 01:11:59.770052     662 pod_workers.go:191] Error syncing pod 320a7d9f-62a0-45d2-b07d-657dc3bd1b28 ("metrics-server-9975d5f86-prnsg_kube-system(320a7d9f-62a0-45d2-b07d-657dc3bd1b28)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0314 01:12:00.101050 2159335 logs.go:123] Gathering logs for kube-proxy [36947fb39456b70b586f2321c1589d336fbf8121fcabf5f669409c9680ddc202] ...
	I0314 01:12:00.101074 2159335 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 36947fb39456b70b586f2321c1589d336fbf8121fcabf5f669409c9680ddc202"
	I0314 01:12:00.174338 2159335 logs.go:123] Gathering logs for storage-provisioner [4e7dc51c48baacd42a05abe380200f2d6c242433155f704a890e45913c11586c] ...
	I0314 01:12:00.174402 2159335 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4e7dc51c48baacd42a05abe380200f2d6c242433155f704a890e45913c11586c"
	I0314 01:12:00.342882 2159335 logs.go:123] Gathering logs for kube-controller-manager [c79d313c53d51ee00f93e8076ad26cc072bc1abea5f1a5b6a64db439ffbbd272] ...
	I0314 01:12:00.342916 2159335 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c79d313c53d51ee00f93e8076ad26cc072bc1abea5f1a5b6a64db439ffbbd272"
	I0314 01:12:00.504401 2159335 logs.go:123] Gathering logs for kube-controller-manager [00c36bf0f8362e90a4a6eb52e67b0fde8d3b9e110159c37ce5cff0a34795231b] ...
	I0314 01:12:00.504488 2159335 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 00c36bf0f8362e90a4a6eb52e67b0fde8d3b9e110159c37ce5cff0a34795231b"
	I0314 01:12:00.615545 2159335 logs.go:123] Gathering logs for kindnet [661201637b62150c5e65fb761af09186766a8c5ee343b157bd78587674493702] ...
	I0314 01:12:00.615587 2159335 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 661201637b62150c5e65fb761af09186766a8c5ee343b157bd78587674493702"
	I0314 01:12:00.668756 2159335 logs.go:123] Gathering logs for storage-provisioner [0378f91f5f30bc555d7408bc86dd626b1d5ffeb38145d3ee69160c360c8f3416] ...
	I0314 01:12:00.668788 2159335 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0378f91f5f30bc555d7408bc86dd626b1d5ffeb38145d3ee69160c360c8f3416"
	I0314 01:12:00.713317 2159335 logs.go:123] Gathering logs for kube-apiserver [fdb44b3f1b8bfa78a5d1d74ef6b0613a584997cf3688d648110b83847d6bfbb7] ...
	I0314 01:12:00.713344 2159335 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fdb44b3f1b8bfa78a5d1d74ef6b0613a584997cf3688d648110b83847d6bfbb7"
	I0314 01:12:00.787839 2159335 logs.go:123] Gathering logs for kube-apiserver [cf476c9051e98ad0797f500808cadc3fa875ce3d5892a9524d352ac49530d1c7] ...
	I0314 01:12:00.787872 2159335 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cf476c9051e98ad0797f500808cadc3fa875ce3d5892a9524d352ac49530d1c7"
	I0314 01:12:00.855002 2159335 logs.go:123] Gathering logs for etcd [1c0e3f86261a4182fbe7a919791938f3f1b269a39c811b556df8aa8f15b310f1] ...
	I0314 01:12:00.855046 2159335 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c0e3f86261a4182fbe7a919791938f3f1b269a39c811b556df8aa8f15b310f1"
	I0314 01:12:00.907768 2159335 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:12:00.907808 2159335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 01:12:01.123021 2159335 logs.go:123] Gathering logs for kube-scheduler [abe11e046f972c9c239dc5503ce7b4e6abdb261d19262a88ebf1157e26888704] ...
	I0314 01:12:01.123090 2159335 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 abe11e046f972c9c239dc5503ce7b4e6abdb261d19262a88ebf1157e26888704"
	I0314 01:12:01.210926 2159335 logs.go:123] Gathering logs for kube-scheduler [89cecf527c72212c9774ad52dd4fdf563d5e162b2b3c0590daf1a40383e5f87c] ...
	I0314 01:12:01.210954 2159335 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 89cecf527c72212c9774ad52dd4fdf563d5e162b2b3c0590daf1a40383e5f87c"
	I0314 01:12:01.269850 2159335 out.go:304] Setting ErrFile to fd 2...
	I0314 01:12:01.270269 2159335 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0314 01:12:01.270366 2159335 out.go:239] X Problems detected in kubelet:
	W0314 01:12:01.270411 2159335 out.go:239]   Mar 14 01:11:31 old-k8s-version-023742 kubelet[662]: E0314 01:11:31.768210     662 pod_workers.go:191] Error syncing pod 1e66610b-54fb-4a50-b7c1-47326c1c1c70 ("dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"
	W0314 01:12:01.270566 2159335 out.go:239]   Mar 14 01:11:42 old-k8s-version-023742 kubelet[662]: E0314 01:11:42.766039     662 pod_workers.go:191] Error syncing pod 1e66610b-54fb-4a50-b7c1-47326c1c1c70 ("dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"
	W0314 01:12:01.270598 2159335 out.go:239]   Mar 14 01:11:45 old-k8s-version-023742 kubelet[662]: E0314 01:11:45.766829     662 pod_workers.go:191] Error syncing pod 320a7d9f-62a0-45d2-b07d-657dc3bd1b28 ("metrics-server-9975d5f86-prnsg_kube-system(320a7d9f-62a0-45d2-b07d-657dc3bd1b28)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0314 01:12:01.270630 2159335 out.go:239]   Mar 14 01:11:57 old-k8s-version-023742 kubelet[662]: E0314 01:11:57.767790     662 pod_workers.go:191] Error syncing pod 1e66610b-54fb-4a50-b7c1-47326c1c1c70 ("dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"
	W0314 01:12:01.270666 2159335 out.go:239]   Mar 14 01:11:59 old-k8s-version-023742 kubelet[662]: E0314 01:11:59.770052     662 pod_workers.go:191] Error syncing pod 320a7d9f-62a0-45d2-b07d-657dc3bd1b28 ("metrics-server-9975d5f86-prnsg_kube-system(320a7d9f-62a0-45d2-b07d-657dc3bd1b28)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0314 01:12:01.270709 2159335 out.go:304] Setting ErrFile to fd 2...
	I0314 01:12:01.270732 2159335 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 01:12:11.271976 2159335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:12:11.284381 2159335 api_server.go:72] duration metric: took 6m1.5739088s to wait for apiserver process to appear ...
	I0314 01:12:11.284404 2159335 api_server.go:88] waiting for apiserver healthz status ...
	I0314 01:12:11.286614 2159335 out.go:177] 
	W0314 01:12:11.288783 2159335 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: cluster wait timed out during healthz check
	W0314 01:12:11.288808 2159335 out.go:239] * 
	W0314 01:12:11.289707 2159335 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0314 01:12:11.292264 2159335 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p old-k8s-version-023742 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0": exit status 80
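The failing sequence is consistent within the log above: kubelet loops on two pods (metrics-server cannot pull from the deliberately unresolvable fake.domain registry, and dashboard-metrics-scraper sits in CrashLoopBackOff), while the apiserver healthz check never succeeds inside the 6m0s node wait, so start exits with GUEST_START. A minimal sketch of probing the same healthz endpoint minikube polls, assuming the cluster IP 192.168.76.2 and the host-published port 35333 shown in the docker inspect output below:

	# probe the apiserver healthz endpoint directly on the node's network
	curl -k https://192.168.76.2:8443/healthz
	# or through the 8443/tcp mapping that docker publishes on the host
	curl -k https://127.0.0.1:35333/healthz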
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-023742
helpers_test.go:235: (dbg) docker inspect old-k8s-version-023742:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "8a6adc8c6a3b6226636e672fe7c844b2396abb72552b50817cef9f2f95db6fcd",
	        "Created": "2024-03-14T01:03:02.193822222Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2159530,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-03-14T01:06:02.492594699Z",
	            "FinishedAt": "2024-03-14T01:06:01.222351113Z"
	        },
	        "Image": "sha256:db62270b4bb0cfcde696782f7a6322baca275275e31814ce9fd8998407bf461e",
	        "ResolvConfPath": "/var/lib/docker/containers/8a6adc8c6a3b6226636e672fe7c844b2396abb72552b50817cef9f2f95db6fcd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8a6adc8c6a3b6226636e672fe7c844b2396abb72552b50817cef9f2f95db6fcd/hostname",
	        "HostsPath": "/var/lib/docker/containers/8a6adc8c6a3b6226636e672fe7c844b2396abb72552b50817cef9f2f95db6fcd/hosts",
	        "LogPath": "/var/lib/docker/containers/8a6adc8c6a3b6226636e672fe7c844b2396abb72552b50817cef9f2f95db6fcd/8a6adc8c6a3b6226636e672fe7c844b2396abb72552b50817cef9f2f95db6fcd-json.log",
	        "Name": "/old-k8s-version-023742",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-023742:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-023742",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/be0eff0a50b93012dc27285a987e503aedf0c14138b3a2356a695b78be795ebf-init/diff:/var/lib/docker/overlay2/72e8565c3c6c9dcaff9dab92d595dc2eb0a265ce93caf6066e88703bac9975f6/diff",
	                "MergedDir": "/var/lib/docker/overlay2/be0eff0a50b93012dc27285a987e503aedf0c14138b3a2356a695b78be795ebf/merged",
	                "UpperDir": "/var/lib/docker/overlay2/be0eff0a50b93012dc27285a987e503aedf0c14138b3a2356a695b78be795ebf/diff",
	                "WorkDir": "/var/lib/docker/overlay2/be0eff0a50b93012dc27285a987e503aedf0c14138b3a2356a695b78be795ebf/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-023742",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-023742/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-023742",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-023742",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-023742",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d96ca5a25f77e9a398cb32ae420c3a9fe73133b64e38d7bb9e5727de67f6bc78",
	            "SandboxKey": "/var/run/docker/netns/d96ca5a25f77",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35336"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35335"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35332"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35334"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35333"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-023742": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "8a6adc8c6a3b",
	                        "old-k8s-version-023742"
	                    ],
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "NetworkID": "5947c1200478aa26f99a492a5868230af15cf7ef83578a6127e0bb8034a9b4e4",
	                    "EndpointID": "6f130c18e293bcf1968779fadc61ecfdd96dd3a598905626000a7dd8e4557f81",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "old-k8s-version-023742",
	                        "8a6adc8c6a3b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
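The inspect output shows the node container healthy at the Docker layer: State.Status is "running" (restarted at 01:06:02, RestartCount 0) and 8443/tcp is published on 127.0.0.1:35333, so the GUEST_START failure happened inside the guest rather than in Docker. A hedged one-liner for pulling just those two fields with docker's Go-template formatter, using the container name from this report:

	docker inspect -f '{{.State.Status}} {{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' old-k8s-version-023742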
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-023742 -n old-k8s-version-023742
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-023742 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-023742 logs -n 25: (2.479044874s)
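The Audit table in the captured logs below records the trigger for the ImagePullBackOff noise: at 01:05 the test re-registered metrics-server to the unresolvable fake.domain before stopping and restarting the profile. One way to confirm which image the deployment actually resolves to, assuming a reachable apiserver (which this run never got), would be:

	kubectl --context old-k8s-version-023742 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'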
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-355815 sudo find                             | cilium-355815             | jenkins | v1.32.0 | 14 Mar 24 01:01 UTC |                     |
	|         | /etc/crio -type f -exec sh -c                          |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                           |         |         |                     |                     |
	| ssh     | -p cilium-355815 sudo crio                             | cilium-355815             | jenkins | v1.32.0 | 14 Mar 24 01:01 UTC |                     |
	|         | config                                                 |                           |         |         |                     |                     |
	| delete  | -p cilium-355815                                       | cilium-355815             | jenkins | v1.32.0 | 14 Mar 24 01:01 UTC | 14 Mar 24 01:01 UTC |
	| start   | -p force-systemd-env-008648                            | force-systemd-env-008648  | jenkins | v1.32.0 | 14 Mar 24 01:01 UTC | 14 Mar 24 01:02 UTC |
	|         | --memory=2048                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr                                      |                           |         |         |                     |                     |
	|         | -v=5 --driver=docker                                   |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-804124                              | force-systemd-flag-804124 | jenkins | v1.32.0 | 14 Mar 24 01:01 UTC | 14 Mar 24 01:01 UTC |
	|         | ssh cat                                                |                           |         |         |                     |                     |
	|         | /etc/containerd/config.toml                            |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-804124                           | force-systemd-flag-804124 | jenkins | v1.32.0 | 14 Mar 24 01:01 UTC | 14 Mar 24 01:01 UTC |
	| start   | -p cert-expiration-798126                              | cert-expiration-798126    | jenkins | v1.32.0 | 14 Mar 24 01:01 UTC | 14 Mar 24 01:02 UTC |
	|         | --memory=2048                                          |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                                   |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-008648                               | force-systemd-env-008648  | jenkins | v1.32.0 | 14 Mar 24 01:02 UTC | 14 Mar 24 01:02 UTC |
	|         | ssh cat                                                |                           |         |         |                     |                     |
	|         | /etc/containerd/config.toml                            |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-008648                            | force-systemd-env-008648  | jenkins | v1.32.0 | 14 Mar 24 01:02 UTC | 14 Mar 24 01:02 UTC |
	| start   | -p cert-options-449029                                 | cert-options-449029       | jenkins | v1.32.0 | 14 Mar 24 01:02 UTC | 14 Mar 24 01:02 UTC |
	|         | --memory=2048                                          |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                              |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15                          |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost                            |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com                       |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                                  |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	| ssh     | cert-options-449029 ssh                                | cert-options-449029       | jenkins | v1.32.0 | 14 Mar 24 01:02 UTC | 14 Mar 24 01:02 UTC |
	|         | openssl x509 -text -noout -in                          |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                           |         |         |                     |                     |
	| ssh     | -p cert-options-449029 -- sudo                         | cert-options-449029       | jenkins | v1.32.0 | 14 Mar 24 01:02 UTC | 14 Mar 24 01:02 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                           |         |         |                     |                     |
	| delete  | -p cert-options-449029                                 | cert-options-449029       | jenkins | v1.32.0 | 14 Mar 24 01:02 UTC | 14 Mar 24 01:02 UTC |
	| start   | -p old-k8s-version-023742                              | old-k8s-version-023742    | jenkins | v1.32.0 | 14 Mar 24 01:02 UTC | 14 Mar 24 01:05 UTC |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                           |         |         |                     |                     |
	|         | --kvm-network=default                                  |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                           |         |         |                     |                     |
	|         | --keep-context=false                                   |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                           |         |         |                     |                     |
	| start   | -p cert-expiration-798126                              | cert-expiration-798126    | jenkins | v1.32.0 | 14 Mar 24 01:05 UTC | 14 Mar 24 01:05 UTC |
	|         | --memory=2048                                          |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-798126                              | cert-expiration-798126    | jenkins | v1.32.0 | 14 Mar 24 01:05 UTC | 14 Mar 24 01:05 UTC |
	| start   | -p no-preload-183952                                   | no-preload-183952         | jenkins | v1.32.0 | 14 Mar 24 01:05 UTC | 14 Mar 24 01:06 UTC |
	|         | --memory=2200 --alsologtostderr                        |                           |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                           |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-023742        | old-k8s-version-023742    | jenkins | v1.32.0 | 14 Mar 24 01:05 UTC | 14 Mar 24 01:05 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                           |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                           |         |         |                     |                     |
	| stop    | -p old-k8s-version-023742                              | old-k8s-version-023742    | jenkins | v1.32.0 | 14 Mar 24 01:05 UTC | 14 Mar 24 01:06 UTC |
	|         | --alsologtostderr -v=3                                 |                           |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-023742             | old-k8s-version-023742    | jenkins | v1.32.0 | 14 Mar 24 01:06 UTC | 14 Mar 24 01:06 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                           |         |         |                     |                     |
	| start   | -p old-k8s-version-023742                              | old-k8s-version-023742    | jenkins | v1.32.0 | 14 Mar 24 01:06 UTC |                     |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                           |         |         |                     |                     |
	|         | --kvm-network=default                                  |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                           |         |         |                     |                     |
	|         | --keep-context=false                                   |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                           |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-183952             | no-preload-183952         | jenkins | v1.32.0 | 14 Mar 24 01:07 UTC | 14 Mar 24 01:07 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                           |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                           |         |         |                     |                     |
	| stop    | -p no-preload-183952                                   | no-preload-183952         | jenkins | v1.32.0 | 14 Mar 24 01:07 UTC | 14 Mar 24 01:07 UTC |
	|         | --alsologtostderr -v=3                                 |                           |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-183952                  | no-preload-183952         | jenkins | v1.32.0 | 14 Mar 24 01:07 UTC | 14 Mar 24 01:07 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                           |         |         |                     |                     |
	| start   | -p no-preload-183952                                   | no-preload-183952         | jenkins | v1.32.0 | 14 Mar 24 01:07 UTC | 14 Mar 24 01:12 UTC |
	|         | --memory=2200 --alsologtostderr                        |                           |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                           |         |         |                     |                     |
	|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/14 01:07:20
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0314 01:07:20.494226 2164567 out.go:291] Setting OutFile to fd 1 ...
	I0314 01:07:20.494533 2164567 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 01:07:20.494561 2164567 out.go:304] Setting ErrFile to fd 2...
	I0314 01:07:20.494580 2164567 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 01:07:20.494862 2164567 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18375-1958430/.minikube/bin
	I0314 01:07:20.495326 2164567 out.go:298] Setting JSON to false
	I0314 01:07:20.496531 2164567 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":31791,"bootTime":1710346650,"procs":234,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0314 01:07:20.496641 2164567 start.go:139] virtualization:  
	I0314 01:07:20.499717 2164567 out.go:177] * [no-preload-183952] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0314 01:07:20.501897 2164567 out.go:177]   - MINIKUBE_LOCATION=18375
	I0314 01:07:20.503741 2164567 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0314 01:07:20.501987 2164567 notify.go:220] Checking for updates...
	I0314 01:07:20.505563 2164567 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18375-1958430/kubeconfig
	I0314 01:07:20.507308 2164567 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18375-1958430/.minikube
	I0314 01:07:20.509472 2164567 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0314 01:07:20.511157 2164567 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0314 01:07:20.513465 2164567 config.go:182] Loaded profile config "no-preload-183952": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.0-rc.2
	I0314 01:07:20.514041 2164567 driver.go:392] Setting default libvirt URI to qemu:///system
	I0314 01:07:20.538127 2164567 docker.go:122] docker version: linux-25.0.4:Docker Engine - Community
	I0314 01:07:20.539280 2164567 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0314 01:07:20.611615 2164567 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:38 OomKillDisable:true NGoroutines:56 SystemTime:2024-03-14 01:07:20.601822751 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0314 01:07:20.611726 2164567 docker.go:295] overlay module found
	I0314 01:07:20.614158 2164567 out.go:177] * Using the docker driver based on existing profile
	I0314 01:07:20.616481 2164567 start.go:297] selected driver: docker
	I0314 01:07:20.616500 2164567 start.go:901] validating driver "docker" against &{Name:no-preload-183952 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-183952 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 01:07:20.616609 2164567 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0314 01:07:20.617242 2164567 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0314 01:07:20.680391 2164567 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:38 OomKillDisable:true NGoroutines:56 SystemTime:2024-03-14 01:07:20.670315491 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0314 01:07:20.680779 2164567 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0314 01:07:20.680836 2164567 cni.go:84] Creating CNI manager for ""
	I0314 01:07:20.680851 2164567 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0314 01:07:20.680890 2164567 start.go:340] cluster config:
	{Name:no-preload-183952 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-183952 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:d
ocker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 01:07:20.684490 2164567 out.go:177] * Starting "no-preload-183952" primary control-plane node in "no-preload-183952" cluster
	I0314 01:07:20.686319 2164567 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0314 01:07:20.689561 2164567 out.go:177] * Pulling base image v0.0.42-1710284843-18375 ...
	I0314 01:07:20.691673 2164567 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime containerd
	I0314 01:07:20.691760 2164567 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f in local docker daemon
	I0314 01:07:20.691828 2164567 profile.go:142] Saving config to /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/no-preload-183952/config.json ...
	I0314 01:07:20.692124 2164567 cache.go:107] acquiring lock: {Name:mk9076275c27269851883446f180b6c88e0e34b3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 01:07:20.692210 2164567 cache.go:115] /home/jenkins/minikube-integration/18375-1958430/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0314 01:07:20.692224 2164567 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/18375-1958430/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 106.453µs
	I0314 01:07:20.692236 2164567 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/18375-1958430/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0314 01:07:20.692247 2164567 cache.go:107] acquiring lock: {Name:mk33701ed146320ecb5b030a5b8a235bd520bfa8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 01:07:20.692283 2164567 cache.go:115] /home/jenkins/minikube-integration/18375-1958430/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 exists
	I0314 01:07:20.692293 2164567 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" -> "/home/jenkins/minikube-integration/18375-1958430/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2" took 47.073µs
	I0314 01:07:20.692300 2164567 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.29.0-rc.2 -> /home/jenkins/minikube-integration/18375-1958430/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 succeeded
	I0314 01:07:20.692316 2164567 cache.go:107] acquiring lock: {Name:mk103ce4979b80deb101d187f9b92f91885fbf1f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 01:07:20.692348 2164567 cache.go:115] /home/jenkins/minikube-integration/18375-1958430/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 exists
	I0314 01:07:20.692358 2164567 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" -> "/home/jenkins/minikube-integration/18375-1958430/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2" took 42.929µs
	I0314 01:07:20.692365 2164567 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 -> /home/jenkins/minikube-integration/18375-1958430/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 succeeded
	I0314 01:07:20.692387 2164567 cache.go:107] acquiring lock: {Name:mkbd985a4e1662f9c76328f20a1ae9e6eb39610a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 01:07:20.692420 2164567 cache.go:115] /home/jenkins/minikube-integration/18375-1958430/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 exists
	I0314 01:07:20.692434 2164567 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" -> "/home/jenkins/minikube-integration/18375-1958430/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2" took 50.79µs
	I0314 01:07:20.692443 2164567 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.29.0-rc.2 -> /home/jenkins/minikube-integration/18375-1958430/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 succeeded
	I0314 01:07:20.692453 2164567 cache.go:107] acquiring lock: {Name:mkb001bca41d226abe8506f20cba7913fddcb978 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 01:07:20.692481 2164567 cache.go:115] /home/jenkins/minikube-integration/18375-1958430/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 exists
	I0314 01:07:20.692490 2164567 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.29.0-rc.2" -> "/home/jenkins/minikube-integration/18375-1958430/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.29.0-rc.2" took 39.073µs
	I0314 01:07:20.692497 2164567 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.29.0-rc.2 -> /home/jenkins/minikube-integration/18375-1958430/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 succeeded
	I0314 01:07:20.692509 2164567 cache.go:107] acquiring lock: {Name:mk4f8cf35660d779ac271ad118033be8cca86330 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 01:07:20.692539 2164567 cache.go:115] /home/jenkins/minikube-integration/18375-1958430/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0314 01:07:20.692547 2164567 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/home/jenkins/minikube-integration/18375-1958430/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 39.885µs
	I0314 01:07:20.692553 2164567 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /home/jenkins/minikube-integration/18375-1958430/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0314 01:07:20.692562 2164567 cache.go:107] acquiring lock: {Name:mk67605906e0166e04ed33849590b5f204e4af14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 01:07:20.692600 2164567 cache.go:115] /home/jenkins/minikube-integration/18375-1958430/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.10-0 exists
	I0314 01:07:20.692609 2164567 cache.go:96] cache image "registry.k8s.io/etcd:3.5.10-0" -> "/home/jenkins/minikube-integration/18375-1958430/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.10-0" took 48.081µs
	I0314 01:07:20.692615 2164567 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.10-0 -> /home/jenkins/minikube-integration/18375-1958430/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.10-0 succeeded
	I0314 01:07:20.692629 2164567 cache.go:107] acquiring lock: {Name:mk0a4f58c844215558398a5d5fba22e104bd8d7e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 01:07:20.692668 2164567 cache.go:115] /home/jenkins/minikube-integration/18375-1958430/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0314 01:07:20.692677 2164567 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/home/jenkins/minikube-integration/18375-1958430/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 48.672µs
	I0314 01:07:20.692683 2164567 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /home/jenkins/minikube-integration/18375-1958430/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0314 01:07:20.692689 2164567 cache.go:87] Successfully saved all images to host disk.
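The eight cache checks above all follow the same pattern: take a per-image lock, stat the cached tarball, and skip the save when it already exists (hence the microsecond timings). A minimal Go sketch of that check — the path layout mirrors the log, but the pull step and helper names are illustrative, not minikube's actual code:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// cachePath maps an image ref like "registry.k8s.io/pause:3.9" to a tar path
// under cacheDir, mirroring the layout in the log (":" becomes "_").
func cachePath(cacheDir, image string) string {
	return filepath.Join(cacheDir, strings.ReplaceAll(image, ":", "_"))
}

func ensureCached(cacheDir, image string) error {
	p := cachePath(cacheDir, image)
	if _, err := os.Stat(p); err == nil {
		fmt.Printf("cache hit: %s\n", p) // the log prints "... exists"
		return nil
	}
	// A real implementation would pull the image and write the tar here.
	return fmt.Errorf("cache miss for %s: pull not implemented in this sketch", image)
}

func main() {
	_ = ensureCached("/home/jenkins/.minikube/cache/images/arm64", "registry.k8s.io/pause:3.9")
}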
	I0314 01:07:20.709286 2164567 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f in local docker daemon, skipping pull
	I0314 01:07:20.709311 2164567 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f exists in daemon, skipping load
	I0314 01:07:20.709332 2164567 cache.go:194] Successfully downloaded all kic artifacts
	I0314 01:07:20.709368 2164567 start.go:360] acquireMachinesLock for no-preload-183952: {Name:mkdd7bf00e096cf6fdef968a43e851f795b4a633 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 01:07:20.709430 2164567 start.go:364] duration metric: took 44.776µs to acquireMachinesLock for "no-preload-183952"
	I0314 01:07:20.709450 2164567 start.go:96] Skipping create...Using existing machine configuration
	I0314 01:07:20.709456 2164567 fix.go:54] fixHost starting: 
	I0314 01:07:20.709743 2164567 cli_runner.go:164] Run: docker container inspect no-preload-183952 --format={{.State.Status}}
	I0314 01:07:20.726090 2164567 fix.go:112] recreateIfNeeded on no-preload-183952: state=Stopped err=<nil>
	W0314 01:07:20.726125 2164567 fix.go:138] unexpected machine state, will restart: <nil>
	I0314 01:07:20.730422 2164567 out.go:177] * Restarting existing docker container for "no-preload-183952" ...
	I0314 01:07:17.588276 2159335 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-023742" in "kube-system" namespace has status "Ready":"False"
	I0314 01:07:19.589318 2159335 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-023742" in "kube-system" namespace has status "Ready":"False"
	I0314 01:07:20.732939 2164567 cli_runner.go:164] Run: docker start no-preload-183952
	I0314 01:07:21.053617 2164567 cli_runner.go:164] Run: docker container inspect no-preload-183952 --format={{.State.Status}}
	I0314 01:07:21.093994 2164567 kic.go:430] container "no-preload-183952" state is running.
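The restart itself is just the docker CLI: start the profile container, then poll its inspect output until the state reads "running". A rough Go equivalent, shelling out to docker exactly as the log does (the container name is from the log; the timeout and poll interval are assumptions):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func startAndWait(name string, timeout time.Duration) error {
	if out, err := exec.Command("docker", "start", name).CombinedOutput(); err != nil {
		return fmt.Errorf("docker start: %v: %s", err, out)
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// Same template the log uses: --format={{.State.Status}}
		out, err := exec.Command("docker", "container", "inspect", name,
			"--format", "{{.State.Status}}").Output()
		if err == nil && strings.TrimSpace(string(out)) == "running" {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("container %q not running after %s", name, timeout)
}

func main() {
	if err := startAndWait("no-preload-183952", 30*time.Second); err != nil {
		panic(err)
	}
	fmt.Println("container is running")
}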
	I0314 01:07:21.094398 2164567 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-183952
	I0314 01:07:21.121583 2164567 profile.go:142] Saving config to /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/no-preload-183952/config.json ...
	I0314 01:07:21.121950 2164567 machine.go:94] provisionDockerMachine start ...
	I0314 01:07:21.122140 2164567 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-183952
	I0314 01:07:21.145400 2164567 main.go:141] libmachine: Using SSH client type: native
	I0314 01:07:21.145664 2164567 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e00] 0x3e4660 <nil>  [] 0s} 127.0.0.1 35341 <nil> <nil>}
	I0314 01:07:21.145673 2164567 main.go:141] libmachine: About to run SSH command:
	hostname
	I0314 01:07:21.146568 2164567 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
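The EOF here is transient: the container has just been started and sshd inside it is not accepting connections yet, so the client retries until the handshake succeeds (the very next log line shows it succeeding about three seconds later). A hedged sketch of such a dial-with-retry loop using golang.org/x/crypto/ssh; the port, user, and key path come from the log, but the retry policy is an assumption:

package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

func dialWithRetry(addr, user, keyPath string, deadline time.Duration) (*ssh.Client, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return nil, err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return nil, err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test container only
		Timeout:         5 * time.Second,
	}
	stop := time.Now().Add(deadline)
	for {
		client, err := ssh.Dial("tcp", addr, cfg)
		if err == nil {
			return client, nil
		}
		if time.Now().After(stop) {
			return nil, fmt.Errorf("ssh dial %s: %w", addr, err)
		}
		time.Sleep(500 * time.Millisecond) // sshd not ready yet; retry
	}
}

func main() {
	client, err := dialWithRetry("127.0.0.1:35341", "docker",
		"/home/jenkins/minikube-integration/18375-1958430/.minikube/machines/no-preload-183952/id_rsa",
		30*time.Second)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()
	out, _ := sess.CombinedOutput("hostname")
	fmt.Print(string(out))
}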
	I0314 01:07:24.286740 2164567 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-183952
	
	I0314 01:07:24.286763 2164567 ubuntu.go:169] provisioning hostname "no-preload-183952"
	I0314 01:07:24.286837 2164567 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-183952
	I0314 01:07:24.303920 2164567 main.go:141] libmachine: Using SSH client type: native
	I0314 01:07:24.304183 2164567 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e00] 0x3e4660 <nil>  [] 0s} 127.0.0.1 35341 <nil> <nil>}
	I0314 01:07:24.304200 2164567 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-183952 && echo "no-preload-183952" | sudo tee /etc/hostname
	I0314 01:07:24.460672 2164567 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-183952
	
	I0314 01:07:24.460752 2164567 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-183952
	I0314 01:07:24.484941 2164567 main.go:141] libmachine: Using SSH client type: native
	I0314 01:07:24.485201 2164567 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e00] 0x3e4660 <nil>  [] 0s} 127.0.0.1 35341 <nil> <nil>}
	I0314 01:07:24.485224 2164567 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-183952' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-183952/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-183952' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0314 01:07:24.635370 2164567 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0314 01:07:24.635433 2164567 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18375-1958430/.minikube CaCertPath:/home/jenkins/minikube-integration/18375-1958430/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18375-1958430/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18375-1958430/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18375-1958430/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18375-1958430/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18375-1958430/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18375-1958430/.minikube}
	I0314 01:07:24.635508 2164567 ubuntu.go:177] setting up certificates
	I0314 01:07:24.635540 2164567 provision.go:84] configureAuth start
	I0314 01:07:24.635617 2164567 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-183952
	I0314 01:07:24.655059 2164567 provision.go:143] copyHostCerts
	I0314 01:07:24.655125 2164567 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-1958430/.minikube/ca.pem, removing ...
	I0314 01:07:24.655140 2164567 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-1958430/.minikube/ca.pem
	I0314 01:07:24.655325 2164567 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-1958430/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18375-1958430/.minikube/ca.pem (1078 bytes)
	I0314 01:07:24.655486 2164567 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-1958430/.minikube/cert.pem, removing ...
	I0314 01:07:24.655498 2164567 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-1958430/.minikube/cert.pem
	I0314 01:07:24.655574 2164567 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-1958430/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18375-1958430/.minikube/cert.pem (1123 bytes)
	I0314 01:07:24.655694 2164567 exec_runner.go:144] found /home/jenkins/minikube-integration/18375-1958430/.minikube/key.pem, removing ...
	I0314 01:07:24.655706 2164567 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18375-1958430/.minikube/key.pem
	I0314 01:07:24.655735 2164567 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18375-1958430/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18375-1958430/.minikube/key.pem (1675 bytes)
	I0314 01:07:24.655801 2164567 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18375-1958430/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18375-1958430/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18375-1958430/.minikube/certs/ca-key.pem org=jenkins.no-preload-183952 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-183952]
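A hedged sketch of that server-cert generation step using Go's crypto/x509, embedding the SANs listed in the log line above. It assumes an RSA, PKCS#1-encoded CA key and trims error handling for brevity, so it is illustrative rather than minikube's actual implementation:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Load the CA pair (error handling trimmed; paths are illustrative).
	caPEM, _ := os.ReadFile("ca.pem")
	caKeyPEM, _ := os.ReadFile("ca-key.pem")
	caBlock, _ := pem.Decode(caPEM)
	caCert, _ := x509.ParseCertificate(caBlock.Bytes)
	keyBlock, _ := pem.Decode(caKeyPEM)
	caKey, _ := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumes an RSA PKCS#1 CA key

	serverKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2), // arbitrary for this sketch
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-183952"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs from the log line above:
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
		DNSNames:    []string{"localhost", "minikube", "no-preload-183952"},
	}
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	_ = os.WriteFile("server.pem",
		pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0644)
}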
	I0314 01:07:25.097864 2164567 provision.go:177] copyRemoteCerts
	I0314 01:07:25.097943 2164567 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0314 01:07:25.098025 2164567 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-183952
	I0314 01:07:25.122991 2164567 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35341 SSHKeyPath:/home/jenkins/minikube-integration/18375-1958430/.minikube/machines/no-preload-183952/id_rsa Username:docker}
	I0314 01:07:25.226567 2164567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-1958430/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0314 01:07:25.253063 2164567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-1958430/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0314 01:07:25.279505 2164567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-1958430/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0314 01:07:25.305843 2164567 provision.go:87] duration metric: took 670.257775ms to configureAuth
	I0314 01:07:25.305871 2164567 ubuntu.go:193] setting minikube options for container-runtime
	I0314 01:07:25.306116 2164567 config.go:182] Loaded profile config "no-preload-183952": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.0-rc.2
	I0314 01:07:25.306143 2164567 machine.go:97] duration metric: took 4.18418088s to provisionDockerMachine
	I0314 01:07:25.306165 2164567 start.go:293] postStartSetup for "no-preload-183952" (driver="docker")
	I0314 01:07:25.306185 2164567 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0314 01:07:25.306268 2164567 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0314 01:07:25.306332 2164567 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-183952
	I0314 01:07:25.322637 2164567 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35341 SSHKeyPath:/home/jenkins/minikube-integration/18375-1958430/.minikube/machines/no-preload-183952/id_rsa Username:docker}
	I0314 01:07:25.424623 2164567 ssh_runner.go:195] Run: cat /etc/os-release
	I0314 01:07:25.427834 2164567 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0314 01:07:25.427877 2164567 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0314 01:07:25.427908 2164567 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0314 01:07:25.427920 2164567 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0314 01:07:25.427930 2164567 filesync.go:126] Scanning /home/jenkins/minikube-integration/18375-1958430/.minikube/addons for local assets ...
	I0314 01:07:25.427999 2164567 filesync.go:126] Scanning /home/jenkins/minikube-integration/18375-1958430/.minikube/files for local assets ...
	I0314 01:07:25.428096 2164567 filesync.go:149] local asset: /home/jenkins/minikube-integration/18375-1958430/.minikube/files/etc/ssl/certs/19638972.pem -> 19638972.pem in /etc/ssl/certs
	I0314 01:07:25.428209 2164567 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0314 01:07:25.436882 2164567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-1958430/.minikube/files/etc/ssl/certs/19638972.pem --> /etc/ssl/certs/19638972.pem (1708 bytes)
	I0314 01:07:25.461936 2164567 start.go:296] duration metric: took 155.748847ms for postStartSetup
	I0314 01:07:25.462016 2164567 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0314 01:07:25.462061 2164567 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-183952
	I0314 01:07:25.478151 2164567 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35341 SSHKeyPath:/home/jenkins/minikube-integration/18375-1958430/.minikube/machines/no-preload-183952/id_rsa Username:docker}
	I0314 01:07:22.088460 2159335 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-023742" in "kube-system" namespace has status "Ready":"False"
	I0314 01:07:24.588300 2159335 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-023742" in "kube-system" namespace has status "Ready":"False"
	I0314 01:07:26.589260 2159335 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-023742" in "kube-system" namespace has status "Ready":"False"
	I0314 01:07:25.573118 2164567 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0314 01:07:25.583801 2164567 fix.go:56] duration metric: took 4.874338564s for fixHost
	I0314 01:07:25.583906 2164567 start.go:83] releasing machines lock for "no-preload-183952", held for 4.874463166s
	I0314 01:07:25.584043 2164567 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-183952
	I0314 01:07:25.606858 2164567 ssh_runner.go:195] Run: cat /version.json
	I0314 01:07:25.606940 2164567 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-183952
	I0314 01:07:25.606859 2164567 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0314 01:07:25.607020 2164567 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-183952
	I0314 01:07:25.635698 2164567 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35341 SSHKeyPath:/home/jenkins/minikube-integration/18375-1958430/.minikube/machines/no-preload-183952/id_rsa Username:docker}
	I0314 01:07:25.648690 2164567 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35341 SSHKeyPath:/home/jenkins/minikube-integration/18375-1958430/.minikube/machines/no-preload-183952/id_rsa Username:docker}
	I0314 01:07:25.865024 2164567 ssh_runner.go:195] Run: systemctl --version
	I0314 01:07:25.869630 2164567 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0314 01:07:25.874221 2164567 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0314 01:07:25.892867 2164567 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0314 01:07:25.892991 2164567 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0314 01:07:25.902008 2164567 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0314 01:07:25.902030 2164567 start.go:494] detecting cgroup driver to use...
	I0314 01:07:25.902061 2164567 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0314 01:07:25.902109 2164567 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0314 01:07:25.916658 2164567 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0314 01:07:25.929516 2164567 docker.go:217] disabling cri-docker service (if available) ...
	I0314 01:07:25.929592 2164567 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0314 01:07:25.942904 2164567 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0314 01:07:25.955194 2164567 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0314 01:07:26.052261 2164567 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0314 01:07:26.152887 2164567 docker.go:233] disabling docker service ...
	I0314 01:07:26.152954 2164567 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0314 01:07:26.166461 2164567 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0314 01:07:26.179675 2164567 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0314 01:07:26.274269 2164567 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0314 01:07:26.368820 2164567 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0314 01:07:26.380779 2164567 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0314 01:07:26.399278 2164567 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0314 01:07:26.410872 2164567 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0314 01:07:26.421146 2164567 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0314 01:07:26.421238 2164567 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0314 01:07:26.431426 2164567 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0314 01:07:26.441654 2164567 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0314 01:07:26.452564 2164567 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0314 01:07:26.463191 2164567 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0314 01:07:26.473015 2164567 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
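The sed runs above rewrite /etc/containerd/config.toml in place; forcing SystemdCgroup = false, for example, keeps containerd on the cgroupfs driver detected earlier. A small Go sketch of that one rewrite, regexp-based like the sed line (file path from the log, everything else an assumption):

package main

import (
	"fmt"
	"os"
	"regexp"
)

// Matches any "SystemdCgroup = ..." line, preserving its indentation.
var systemdCgroup = regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)

func forceCgroupfs(path string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	patched := systemdCgroup.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
	return os.WriteFile(path, patched, 0644)
}

func main() {
	if err := forceCgroupfs("/etc/containerd/config.toml"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}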
	I0314 01:07:26.483507 2164567 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0314 01:07:26.493164 2164567 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0314 01:07:26.502456 2164567 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 01:07:26.593112 2164567 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0314 01:07:26.745936 2164567 start.go:541] Will wait 60s for socket path /run/containerd/containerd.sock
	I0314 01:07:26.746005 2164567 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0314 01:07:26.750477 2164567 start.go:562] Will wait 60s for crictl version
	I0314 01:07:26.750563 2164567 ssh_runner.go:195] Run: which crictl
	I0314 01:07:26.754195 2164567 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0314 01:07:26.798341 2164567 start.go:578] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.28
	RuntimeApiVersion:  v1
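"Will wait 60s for socket path" and "Will wait 60s for crictl version" are simple readiness polls after the containerd restart. A minimal sketch of the socket wait (the poll interval and error text are assumptions; the socket path is from the log):

package main

import (
	"fmt"
	"os"
	"time"
)

func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
			return nil // socket is present; crictl/containerd calls can proceed
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("containerd socket is ready")
}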
	I0314 01:07:26.798413 2164567 ssh_runner.go:195] Run: containerd --version
	I0314 01:07:26.828556 2164567 ssh_runner.go:195] Run: containerd --version
	I0314 01:07:26.855515 2164567 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on containerd 1.6.28 ...
	I0314 01:07:26.857221 2164567 cli_runner.go:164] Run: docker network inspect no-preload-183952 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0314 01:07:26.873259 2164567 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0314 01:07:26.877227 2164567 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
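This one-liner makes the /etc/hosts pin idempotent: strip any previous host.minikube.internal line, then append the current gateway IP. The same logic as a Go sketch (file path and entry taken from the log):

package main

import (
	"fmt"
	"os"
	"strings"
)

func pinHost(hostsPath, ip, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
	var kept []string
	for _, line := range lines {
		if strings.HasSuffix(line, "\t"+name) {
			continue // drop the stale mapping, mirroring `grep -v`
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := pinHost("/etc/hosts", "192.168.85.1", "host.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}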
	I0314 01:07:26.888651 2164567 kubeadm.go:877] updating cluster {Name:no-preload-183952 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-183952 Namespace:default APIServerHAVIP: APIServerName:minikubeC
A APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:
/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0314 01:07:26.888783 2164567 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime containerd
	I0314 01:07:26.888836 2164567 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 01:07:26.930315 2164567 containerd.go:612] all images are preloaded for containerd runtime.
	I0314 01:07:26.930338 2164567 cache_images.go:84] Images are preloaded, skipping loading
	I0314 01:07:26.930347 2164567 kubeadm.go:928] updating node { 192.168.85.2 8443 v1.29.0-rc.2 containerd true true} ...
	I0314 01:07:26.930451 2164567 kubeadm.go:940] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-183952 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-183952 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0314 01:07:26.930525 2164567 ssh_runner.go:195] Run: sudo crictl info
	I0314 01:07:26.974874 2164567 cni.go:84] Creating CNI manager for ""
	I0314 01:07:26.974903 2164567 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0314 01:07:26.974913 2164567 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0314 01:07:26.974937 2164567 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-183952 NodeName:no-preload-183952 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0314 01:07:26.975072 2164567 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-183952"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0314 01:07:26.975145 2164567 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0314 01:07:26.985577 2164567 binaries.go:44] Found k8s binaries, skipping transfer
	I0314 01:07:26.985654 2164567 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0314 01:07:26.994696 2164567 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I0314 01:07:27.016959 2164567 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0314 01:07:27.037144 2164567 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2176 bytes)
	I0314 01:07:27.057977 2164567 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0314 01:07:27.062154 2164567 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 01:07:27.074694 2164567 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 01:07:27.171236 2164567 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 01:07:27.186130 2164567 certs.go:68] Setting up /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/no-preload-183952 for IP: 192.168.85.2
	I0314 01:07:27.186206 2164567 certs.go:194] generating shared ca certs ...
	I0314 01:07:27.186236 2164567 certs.go:226] acquiring lock for ca certs: {Name:mka77573162012513ec65b9398fcff30bed9742a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 01:07:27.186447 2164567 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18375-1958430/.minikube/ca.key
	I0314 01:07:27.186525 2164567 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18375-1958430/.minikube/proxy-client-ca.key
	I0314 01:07:27.186560 2164567 certs.go:256] generating profile certs ...
	I0314 01:07:27.186702 2164567 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/no-preload-183952/client.key
	I0314 01:07:27.186823 2164567 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/no-preload-183952/apiserver.key.df3dc527
	I0314 01:07:27.186937 2164567 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/no-preload-183952/proxy-client.key
	I0314 01:07:27.187096 2164567 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-1958430/.minikube/certs/1963897.pem (1338 bytes)
	W0314 01:07:27.187150 2164567 certs.go:480] ignoring /home/jenkins/minikube-integration/18375-1958430/.minikube/certs/1963897_empty.pem, impossibly tiny 0 bytes
	I0314 01:07:27.187172 2164567 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-1958430/.minikube/certs/ca-key.pem (1679 bytes)
	I0314 01:07:27.187243 2164567 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-1958430/.minikube/certs/ca.pem (1078 bytes)
	I0314 01:07:27.187298 2164567 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-1958430/.minikube/certs/cert.pem (1123 bytes)
	I0314 01:07:27.187329 2164567 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-1958430/.minikube/certs/key.pem (1675 bytes)
	I0314 01:07:27.187380 2164567 certs.go:484] found cert: /home/jenkins/minikube-integration/18375-1958430/.minikube/files/etc/ssl/certs/19638972.pem (1708 bytes)
	I0314 01:07:27.188059 2164567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-1958430/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0314 01:07:27.223542 2164567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-1958430/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0314 01:07:27.253020 2164567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-1958430/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0314 01:07:27.281774 2164567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-1958430/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0314 01:07:27.314985 2164567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/no-preload-183952/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0314 01:07:27.361450 2164567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/no-preload-183952/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0314 01:07:27.393334 2164567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/no-preload-183952/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0314 01:07:27.424493 2164567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/no-preload-183952/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0314 01:07:27.461312 2164567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-1958430/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0314 01:07:27.487959 2164567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-1958430/.minikube/certs/1963897.pem --> /usr/share/ca-certificates/1963897.pem (1338 bytes)
	I0314 01:07:27.528032 2164567 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18375-1958430/.minikube/files/etc/ssl/certs/19638972.pem --> /usr/share/ca-certificates/19638972.pem (1708 bytes)
	I0314 01:07:27.554797 2164567 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0314 01:07:27.573065 2164567 ssh_runner.go:195] Run: openssl version
	I0314 01:07:27.581473 2164567 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0314 01:07:27.593874 2164567 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0314 01:07:27.600916 2164567 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 14 00:21 /usr/share/ca-certificates/minikubeCA.pem
	I0314 01:07:27.600991 2164567 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0314 01:07:27.609613 2164567 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0314 01:07:27.619328 2164567 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1963897.pem && ln -fs /usr/share/ca-certificates/1963897.pem /etc/ssl/certs/1963897.pem"
	I0314 01:07:27.629265 2164567 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1963897.pem
	I0314 01:07:27.633298 2164567 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 14 00:26 /usr/share/ca-certificates/1963897.pem
	I0314 01:07:27.633365 2164567 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1963897.pem
	I0314 01:07:27.640682 2164567 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1963897.pem /etc/ssl/certs/51391683.0"
	I0314 01:07:27.651100 2164567 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/19638972.pem && ln -fs /usr/share/ca-certificates/19638972.pem /etc/ssl/certs/19638972.pem"
	I0314 01:07:27.660927 2164567 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/19638972.pem
	I0314 01:07:27.664580 2164567 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 14 00:26 /usr/share/ca-certificates/19638972.pem
	I0314 01:07:27.664689 2164567 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/19638972.pem
	I0314 01:07:27.672113 2164567 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/19638972.pem /etc/ssl/certs/3ec20f2e.0"
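Each `-hash` run prints the certificate's OpenSSL subject hash, and the symlink that follows (e.g. b5213941.0) is what lets OpenSSL-based clients find the CA in /etc/ssl/certs. A sketch of that pairing, shelling out to openssl as the log does (paths from the log; helper name is hypothetical):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkBySubjectHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // replace a stale link if present, like `ln -fs`
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}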
	I0314 01:07:27.681305 2164567 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0314 01:07:27.685054 2164567 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0314 01:07:27.691956 2164567 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0314 01:07:27.699377 2164567 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0314 01:07:27.706371 2164567 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0314 01:07:27.713876 2164567 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0314 01:07:27.721098 2164567 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
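These `-checkend 86400` runs ask whether each certificate expires within the next 24 hours, so that any near-expiry cert can be regenerated before the cluster restarts. The same check in pure Go, as a sketch (the cert path is one of the ones probed above):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// Equivalent to `openssl x509 -checkend <seconds>`.
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if soon {
		fmt.Println("certificate expires within 24h; regeneration needed")
	}
}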
	I0314 01:07:27.728218 2164567 kubeadm.go:391] StartCluster: {Name:no-preload-183952 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-183952 Namespace:default APIServerHAVIP: APIServerName:minikubeCA A
PIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/ho
me/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 01:07:27.728317 2164567 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0314 01:07:27.728401 2164567 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 01:07:27.779704 2164567 cri.go:89] found id: "fce51585db2ccb145151be13dcfb93f128f9c83b3d5057ed298f4c72a4923602"
	I0314 01:07:27.779728 2164567 cri.go:89] found id: "2d000a4f5e324dc6ef2641ef49e48ce29f1951ae66fae9eba1637086079d822f"
	I0314 01:07:27.779733 2164567 cri.go:89] found id: "cc215e61bd827ccb6ef08e1b6233fc18ffe5891ca3a98ae8b0d074ab971850cd"
	I0314 01:07:27.779743 2164567 cri.go:89] found id: "5b506479b9f8549a1328dcd0ed478bc43502c6dc60ed26325977136263ab523c"
	I0314 01:07:27.779746 2164567 cri.go:89] found id: "dedbba40a1e0160ff417b93d3e8c30371b7d47b139e6891c9e49311c61cb963a"
	I0314 01:07:27.779750 2164567 cri.go:89] found id: "f879b7c53550f5f73f52a19c81373092a43f07fa2c0eab968e4ca6e842db66ad"
	I0314 01:07:27.779756 2164567 cri.go:89] found id: "4b18a1225338f6b055a8ed1e1d1fdf13dc3e26f2d1c4e57dacd35453fb92208b"
	I0314 01:07:27.779759 2164567 cri.go:89] found id: "35a448a37fe417bd5ef90a5f2619fd285e7f7b440884d6ac9bf11cbb9b611a8f"
	I0314 01:07:27.779762 2164567 cri.go:89] found id: ""
	I0314 01:07:27.779820 2164567 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0314 01:07:27.807738 2164567 cri.go:116] JSON = null
	W0314 01:07:27.807787 2164567 kubeadm.go:398] unpause failed: list paused: list returned 0 containers, but ps returned 8
	I0314 01:07:27.807871 2164567 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0314 01:07:27.820391 2164567 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0314 01:07:27.820459 2164567 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0314 01:07:27.820478 2164567 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0314 01:07:27.820579 2164567 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0314 01:07:27.834599 2164567 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0314 01:07:27.835412 2164567 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-183952" does not appear in /home/jenkins/minikube-integration/18375-1958430/kubeconfig
	I0314 01:07:27.835789 2164567 kubeconfig.go:62] /home/jenkins/minikube-integration/18375-1958430/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-183952" cluster setting kubeconfig missing "no-preload-183952" context setting]
	I0314 01:07:27.836394 2164567 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-1958430/kubeconfig: {Name:mkdddca847fdd161b32ac7434f6b37d491dbdecd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
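The repair adds the missing cluster and context stanzas for the profile and rewrites the kubeconfig under the file lock shown. A hedged sketch using client-go's clientcmd package — this is not minikube's own code, and the server URL and default path are illustrative:

package main

import (
	"k8s.io/client-go/tools/clientcmd"
	api "k8s.io/client-go/tools/clientcmd/api"
)

func repairKubeconfig(path, profile, server string) error {
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		return err
	}
	if _, ok := cfg.Clusters[profile]; !ok {
		cluster := api.NewCluster()
		cluster.Server = server // e.g. "https://192.168.85.2:8443"
		cfg.Clusters[profile] = cluster
	}
	if _, ok := cfg.Contexts[profile]; !ok {
		ctx := api.NewContext()
		ctx.Cluster = profile
		ctx.AuthInfo = profile
		cfg.Contexts[profile] = ctx
	}
	return clientcmd.WriteToFile(*cfg, path)
}

func main() {
	_ = repairKubeconfig(clientcmd.RecommendedHomeFile, "no-preload-183952", "https://192.168.85.2:8443")
}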
	I0314 01:07:27.838061 2164567 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0314 01:07:27.851413 2164567 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.85.2
	I0314 01:07:27.851491 2164567 kubeadm.go:591] duration metric: took 30.991125ms to restartPrimaryControlPlane
	I0314 01:07:27.851515 2164567 kubeadm.go:393] duration metric: took 123.30567ms to StartCluster
	I0314 01:07:27.851566 2164567 settings.go:142] acquiring lock: {Name:mkb041dc79ae1947b27d39dd7ebbd3bd473ee07d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 01:07:27.851655 2164567 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18375-1958430/kubeconfig
	I0314 01:07:27.852738 2164567 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-1958430/kubeconfig: {Name:mkdddca847fdd161b32ac7434f6b37d491dbdecd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 01:07:27.853025 2164567 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0314 01:07:27.855376 2164567 out.go:177] * Verifying Kubernetes components...
	I0314 01:07:27.853539 2164567 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0314 01:07:27.853634 2164567 config.go:182] Loaded profile config "no-preload-183952": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.0-rc.2
	I0314 01:07:27.857560 2164567 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 01:07:27.857689 2164567 addons.go:69] Setting storage-provisioner=true in profile "no-preload-183952"
	I0314 01:07:27.857728 2164567 addons.go:234] Setting addon storage-provisioner=true in "no-preload-183952"
	W0314 01:07:27.857763 2164567 addons.go:243] addon storage-provisioner should already be in state true
	I0314 01:07:27.857806 2164567 host.go:66] Checking if "no-preload-183952" exists ...
	I0314 01:07:27.858225 2164567 addons.go:69] Setting default-storageclass=true in profile "no-preload-183952"
	I0314 01:07:27.858295 2164567 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-183952"
	I0314 01:07:27.858699 2164567 cli_runner.go:164] Run: docker container inspect no-preload-183952 --format={{.State.Status}}
	I0314 01:07:27.858899 2164567 addons.go:69] Setting metrics-server=true in profile "no-preload-183952"
	I0314 01:07:27.858951 2164567 addons.go:234] Setting addon metrics-server=true in "no-preload-183952"
	W0314 01:07:27.858971 2164567 addons.go:243] addon metrics-server should already be in state true
	I0314 01:07:27.859357 2164567 host.go:66] Checking if "no-preload-183952" exists ...
	I0314 01:07:27.859100 2164567 cli_runner.go:164] Run: docker container inspect no-preload-183952 --format={{.State.Status}}
	I0314 01:07:27.859114 2164567 addons.go:69] Setting dashboard=true in profile "no-preload-183952"
	I0314 01:07:27.860307 2164567 addons.go:234] Setting addon dashboard=true in "no-preload-183952"
	W0314 01:07:27.860317 2164567 addons.go:243] addon dashboard should already be in state true
	I0314 01:07:27.860341 2164567 host.go:66] Checking if "no-preload-183952" exists ...
	I0314 01:07:27.860727 2164567 cli_runner.go:164] Run: docker container inspect no-preload-183952 --format={{.State.Status}}
	I0314 01:07:27.863495 2164567 cli_runner.go:164] Run: docker container inspect no-preload-183952 --format={{.State.Status}}
	I0314 01:07:27.930700 2164567 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 01:07:27.929380 2164567 addons.go:234] Setting addon default-storageclass=true in "no-preload-183952"
	I0314 01:07:27.935037 2164567 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0314 01:07:27.932965 2164567 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	W0314 01:07:27.933079 2164567 addons.go:243] addon default-storageclass should already be in state true
	I0314 01:07:27.934251 2164567 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0314 01:07:27.937739 2164567 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0314 01:07:27.937817 2164567 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-183952
	I0314 01:07:27.942550 2164567 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0314 01:07:27.942573 2164567 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0314 01:07:27.942644 2164567 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-183952
	I0314 01:07:27.944454 2164567 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0314 01:07:27.938373 2164567 host.go:66] Checking if "no-preload-183952" exists ...
	I0314 01:07:27.956160 2164567 cli_runner.go:164] Run: docker container inspect no-preload-183952 --format={{.State.Status}}
	I0314 01:07:27.956516 2164567 addons.go:426] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0314 01:07:27.956530 2164567 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0314 01:07:27.956576 2164567 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-183952
	I0314 01:07:27.985756 2164567 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35341 SSHKeyPath:/home/jenkins/minikube-integration/18375-1958430/.minikube/machines/no-preload-183952/id_rsa Username:docker}
	I0314 01:07:28.019009 2164567 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35341 SSHKeyPath:/home/jenkins/minikube-integration/18375-1958430/.minikube/machines/no-preload-183952/id_rsa Username:docker}
	I0314 01:07:28.026897 2164567 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0314 01:07:28.026919 2164567 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0314 01:07:28.026995 2164567 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-183952
	I0314 01:07:28.038264 2164567 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35341 SSHKeyPath:/home/jenkins/minikube-integration/18375-1958430/.minikube/machines/no-preload-183952/id_rsa Username:docker}
	I0314 01:07:28.062221 2164567 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35341 SSHKeyPath:/home/jenkins/minikube-integration/18375-1958430/.minikube/machines/no-preload-183952/id_rsa Username:docker}
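
Each cli_runner inspect / sshutil pair above is the same two-step move: ask Docker which host port is bound to the container's 22/tcp, then dial SSH on 127.0.0.1 at that port (35341 here) to scp the addon manifests onto the node. A sketch of the port lookup, shelling out to docker with the exact Go template from the log; the function name is made up:

	package sshport

	import (
		"os/exec"
		"strings"
	)

	// sshHostPort returns the host port Docker mapped to the container's
	// SSH port, using the same inspect template as the log lines above.
	func sshHostPort(container string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect", "-f",
			`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
			container).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}
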
	I0314 01:07:28.110141 2164567 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 01:07:28.251805 2164567 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0314 01:07:28.280220 2164567 node_ready.go:35] waiting up to 6m0s for node "no-preload-183952" to be "Ready" ...
	I0314 01:07:28.387475 2164567 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0314 01:07:28.387527 2164567 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0314 01:07:28.430031 2164567 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0314 01:07:28.430094 2164567 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0314 01:07:28.488954 2164567 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0314 01:07:28.508292 2164567 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0314 01:07:28.508323 2164567 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0314 01:07:28.532220 2164567 addons.go:426] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0314 01:07:28.532259 2164567 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	W0314 01:07:28.624793 2164567 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp 127.0.0.1:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0314 01:07:28.624834 2164567 retry.go:31] will retry after 337.812583ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp 127.0.0.1:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
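
This failure is expected during a restart: the first kubectl apply races the apiserver coming back up on 8443, so the runner logs "will retry" and sleeps briefly before reapplying (successfully, per the --force apply that completes below). The retry shape, as a rough Go sketch — a hypothetical helper, not minikube's retry.go:

	package applyretry

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// applyWithRetry re-runs `kubectl apply -f manifest` until it succeeds
	// or attempts run out, waiting with a growing delay between tries.
	func applyWithRetry(kubectl, manifest string, attempts int) error {
		delay := 300 * time.Millisecond // the first retry above waited ~338ms
		var lastErr error
		for i := 0; i < attempts; i++ {
			out, err := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
				kubectl, "apply", "-f", manifest).CombinedOutput()
			if err == nil {
				return nil
			}
			lastErr = fmt.Errorf("apply %s: %v: %s", manifest, err, out)
			time.Sleep(delay)
			delay *= 2
		}
		return lastErr
	}
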
	I0314 01:07:28.660024 2164567 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0314 01:07:28.660052 2164567 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0314 01:07:28.756604 2164567 addons.go:426] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0314 01:07:28.756630 2164567 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0314 01:07:28.819779 2164567 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0314 01:07:28.819806 2164567 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0314 01:07:28.923667 2164567 addons.go:426] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0314 01:07:28.923695 2164567 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0314 01:07:28.941246 2164567 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0314 01:07:28.963556 2164567 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0314 01:07:29.061616 2164567 addons.go:426] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0314 01:07:29.061644 2164567 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0314 01:07:29.185222 2164567 addons.go:426] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0314 01:07:29.185248 2164567 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0314 01:07:29.412568 2164567 addons.go:426] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0314 01:07:29.412598 2164567 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0314 01:07:29.451801 2164567 addons.go:426] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0314 01:07:29.451828 2164567 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0314 01:07:29.520335 2164567 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0314 01:07:29.089477 2159335 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-023742" in "kube-system" namespace has status "Ready":"False"
	I0314 01:07:31.587727 2159335 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-023742" in "kube-system" namespace has status "Ready":"False"
	I0314 01:07:34.167773 2164567 node_ready.go:49] node "no-preload-183952" has status "Ready":"True"
	I0314 01:07:34.167807 2164567 node_ready.go:38] duration metric: took 5.887546451s for node "no-preload-183952" to be "Ready" ...
	I0314 01:07:34.167817 2164567 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 01:07:34.187955 2164567 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-b679l" in "kube-system" namespace to be "Ready" ...
	I0314 01:07:34.201476 2164567 pod_ready.go:92] pod "coredns-76f75df574-b679l" in "kube-system" namespace has status "Ready":"True"
	I0314 01:07:34.201503 2164567 pod_ready.go:81] duration metric: took 13.515783ms for pod "coredns-76f75df574-b679l" in "kube-system" namespace to be "Ready" ...
	I0314 01:07:34.201523 2164567 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-183952" in "kube-system" namespace to be "Ready" ...
	I0314 01:07:34.247603 2164567 pod_ready.go:92] pod "etcd-no-preload-183952" in "kube-system" namespace has status "Ready":"True"
	I0314 01:07:34.247638 2164567 pod_ready.go:81] duration metric: took 46.105798ms for pod "etcd-no-preload-183952" in "kube-system" namespace to be "Ready" ...
	I0314 01:07:34.247654 2164567 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-183952" in "kube-system" namespace to be "Ready" ...
	I0314 01:07:34.272018 2164567 pod_ready.go:92] pod "kube-apiserver-no-preload-183952" in "kube-system" namespace has status "Ready":"True"
	I0314 01:07:34.272050 2164567 pod_ready.go:81] duration metric: took 24.388251ms for pod "kube-apiserver-no-preload-183952" in "kube-system" namespace to be "Ready" ...
	I0314 01:07:34.272065 2164567 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-183952" in "kube-system" namespace to be "Ready" ...
	I0314 01:07:34.278387 2164567 pod_ready.go:92] pod "kube-controller-manager-no-preload-183952" in "kube-system" namespace has status "Ready":"True"
	I0314 01:07:34.278426 2164567 pod_ready.go:81] duration metric: took 6.352536ms for pod "kube-controller-manager-no-preload-183952" in "kube-system" namespace to be "Ready" ...
	I0314 01:07:34.278438 2164567 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-h2xc2" in "kube-system" namespace to be "Ready" ...
	I0314 01:07:34.372676 2164567 pod_ready.go:92] pod "kube-proxy-h2xc2" in "kube-system" namespace has status "Ready":"True"
	I0314 01:07:34.372710 2164567 pod_ready.go:81] duration metric: took 94.263548ms for pod "kube-proxy-h2xc2" in "kube-system" namespace to be "Ready" ...
	I0314 01:07:34.372723 2164567 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-183952" in "kube-system" namespace to be "Ready" ...
	I0314 01:07:34.683227 2164567 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.194213562s)
	I0314 01:07:33.592793 2159335 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-023742" in "kube-system" namespace has status "Ready":"False"
	I0314 01:07:36.088446 2159335 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-023742" in "kube-system" namespace has status "Ready":"False"
	I0314 01:07:36.380854 2164567 pod_ready.go:102] pod "kube-scheduler-no-preload-183952" in "kube-system" namespace has status "Ready":"False"
	I0314 01:07:37.039651 2164567 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.098355473s)
	I0314 01:07:37.039722 2164567 addons.go:470] Verifying addon metrics-server=true in "no-preload-183952"
	I0314 01:07:37.148671 2164567 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.185065466s)
	I0314 01:07:37.279995 2164567 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (7.759585812s)
	I0314 01:07:37.282312 2164567 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-183952 addons enable metrics-server
	
	I0314 01:07:37.284290 2164567 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner, dashboard
	I0314 01:07:37.286172 2164567 addons.go:505] duration metric: took 9.432632203s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner dashboard]
	I0314 01:07:38.878710 2164567 pod_ready.go:102] pod "kube-scheduler-no-preload-183952" in "kube-system" namespace has status "Ready":"False"
	I0314 01:07:38.588291 2159335 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-023742" in "kube-system" namespace has status "Ready":"False"
	I0314 01:07:41.087906 2159335 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-023742" in "kube-system" namespace has status "Ready":"False"
	I0314 01:07:40.878808 2164567 pod_ready.go:102] pod "kube-scheduler-no-preload-183952" in "kube-system" namespace has status "Ready":"False"
	I0314 01:07:42.879296 2164567 pod_ready.go:102] pod "kube-scheduler-no-preload-183952" in "kube-system" namespace has status "Ready":"False"
	I0314 01:07:45.379791 2164567 pod_ready.go:102] pod "kube-scheduler-no-preload-183952" in "kube-system" namespace has status "Ready":"False"
	I0314 01:07:43.088400 2159335 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-023742" in "kube-system" namespace has status "Ready":"False"
	I0314 01:07:44.088467 2159335 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-023742" in "kube-system" namespace has status "Ready":"True"
	I0314 01:07:44.088496 2159335 pod_ready.go:81] duration metric: took 1m5.506978254s for pod "kube-controller-manager-old-k8s-version-023742" in "kube-system" namespace to be "Ready" ...
	I0314 01:07:44.088508 2159335 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vm5pd" in "kube-system" namespace to be "Ready" ...
	I0314 01:07:44.093958 2159335 pod_ready.go:92] pod "kube-proxy-vm5pd" in "kube-system" namespace has status "Ready":"True"
	I0314 01:07:44.093985 2159335 pod_ready.go:81] duration metric: took 5.469145ms for pod "kube-proxy-vm5pd" in "kube-system" namespace to be "Ready" ...
	I0314 01:07:44.093997 2159335 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-023742" in "kube-system" namespace to be "Ready" ...
	I0314 01:07:46.100906 2159335 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-023742" in "kube-system" namespace has status "Ready":"False"
	I0314 01:07:47.435428 2164567 pod_ready.go:102] pod "kube-scheduler-no-preload-183952" in "kube-system" namespace has status "Ready":"False"
	I0314 01:07:48.380337 2164567 pod_ready.go:92] pod "kube-scheduler-no-preload-183952" in "kube-system" namespace has status "Ready":"True"
	I0314 01:07:48.380358 2164567 pod_ready.go:81] duration metric: took 14.007627605s for pod "kube-scheduler-no-preload-183952" in "kube-system" namespace to be "Ready" ...
	I0314 01:07:48.380369 2164567 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-hk6xd" in "kube-system" namespace to be "Ready" ...
	I0314 01:07:50.387003 2164567 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hk6xd" in "kube-system" namespace has status "Ready":"False"
	I0314 01:07:48.601389 2159335 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-023742" in "kube-system" namespace has status "Ready":"False"
	I0314 01:07:51.099734 2159335 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-023742" in "kube-system" namespace has status "Ready":"False"
	I0314 01:07:52.387181 2164567 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hk6xd" in "kube-system" namespace has status "Ready":"False"
	I0314 01:07:54.387729 2164567 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hk6xd" in "kube-system" namespace has status "Ready":"False"
	I0314 01:07:53.101296 2159335 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-023742" in "kube-system" namespace has status "Ready":"False"
	I0314 01:07:55.101787 2159335 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-023742" in "kube-system" namespace has status "Ready":"False"
	I0314 01:07:56.887470 2164567 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hk6xd" in "kube-system" namespace has status "Ready":"False"
	I0314 01:07:59.387378 2164567 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hk6xd" in "kube-system" namespace has status "Ready":"False"
	I0314 01:07:57.600299 2159335 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-023742" in "kube-system" namespace has status "Ready":"False"
	I0314 01:07:59.106087 2159335 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-023742" in "kube-system" namespace has status "Ready":"True"
	I0314 01:07:59.106115 2159335 pod_ready.go:81] duration metric: took 15.012109393s for pod "kube-scheduler-old-k8s-version-023742" in "kube-system" namespace to be "Ready" ...
	I0314 01:07:59.106127 2159335 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace to be "Ready" ...
	I0314 01:08:01.112043 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:08:01.387562 2164567 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hk6xd" in "kube-system" namespace has status "Ready":"False"
	I0314 01:08:03.887282 2164567 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hk6xd" in "kube-system" namespace has status "Ready":"False"
	I0314 01:08:03.113156 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:08:05.613131 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:08:06.387319 2164567 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hk6xd" in "kube-system" namespace has status "Ready":"False"
	I0314 01:08:08.887539 2164567 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hk6xd" in "kube-system" namespace has status "Ready":"False"
	I0314 01:08:08.113014 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:08:10.113211 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:08:11.386615 2164567 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hk6xd" in "kube-system" namespace has status "Ready":"False"
	I0314 01:08:13.387493 2164567 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hk6xd" in "kube-system" namespace has status "Ready":"False"
	I0314 01:08:12.614105 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:08:15.113089 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:08:15.886396 2164567 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hk6xd" in "kube-system" namespace has status "Ready":"False"
	I0314 01:08:17.887031 2164567 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hk6xd" in "kube-system" namespace has status "Ready":"False"
	I0314 01:08:19.887635 2164567 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hk6xd" in "kube-system" namespace has status "Ready":"False"
	I0314 01:08:17.614005 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:08:20.113930 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:08:22.387006 2164567 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hk6xd" in "kube-system" namespace has status "Ready":"False"
	I0314 01:08:24.387437 2164567 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hk6xd" in "kube-system" namespace has status "Ready":"False"
	I0314 01:08:22.613294 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:08:25.112900 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:08:26.387815 2164567 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hk6xd" in "kube-system" namespace has status "Ready":"False"
	I0314 01:08:28.886694 2164567 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hk6xd" in "kube-system" namespace has status "Ready":"False"
	I0314 01:08:27.612495 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:08:30.114205 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:08:30.887780 2164567 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hk6xd" in "kube-system" namespace has status "Ready":"False"
	I0314 01:08:33.386831 2164567 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hk6xd" in "kube-system" namespace has status "Ready":"False"
	I0314 01:08:35.387404 2164567 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hk6xd" in "kube-system" namespace has status "Ready":"False"
	I0314 01:08:32.612283 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:08:34.612593 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:08:36.613623 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:08:37.887560 2164567 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hk6xd" in "kube-system" namespace has status "Ready":"False"
	I0314 01:08:40.386816 2164567 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hk6xd" in "kube-system" namespace has status "Ready":"False"
	I0314 01:08:39.113089 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:08:41.612451 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:08:42.387159 2164567 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hk6xd" in "kube-system" namespace has status "Ready":"False"
	I0314 01:08:44.887374 2164567 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hk6xd" in "kube-system" namespace has status "Ready":"False"
	I0314 01:08:43.613091 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:08:46.116130 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:08:47.387962 2164567 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hk6xd" in "kube-system" namespace has status "Ready":"False"
	I0314 01:08:49.887170 2164567 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hk6xd" in "kube-system" namespace has status "Ready":"False"
	I0314 01:08:48.612362 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:08:50.612447 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:08:51.888356 2164567 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hk6xd" in "kube-system" namespace has status "Ready":"False"
	I0314 01:08:54.386984 2164567 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hk6xd" in "kube-system" namespace has status "Ready":"False"
	I0314 01:08:52.612689 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:08:55.113105 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:08:56.887523 2164567 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hk6xd" in "kube-system" namespace has status "Ready":"False"
	I0314 01:08:58.888011 2164567 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hk6xd" in "kube-system" namespace has status "Ready":"False"
	I0314 01:08:57.612018 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:08:59.612746 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:09:01.387679 2164567 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hk6xd" in "kube-system" namespace has status "Ready":"False"
	I0314 01:09:03.887039 2164567 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hk6xd" in "kube-system" namespace has status "Ready":"False"
	I0314 01:09:02.113430 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:09:04.612370 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:09:05.887869 2164567 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hk6xd" in "kube-system" namespace has status "Ready":"False"
	I0314 01:09:08.387250 2164567 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hk6xd" in "kube-system" namespace has status "Ready":"False"
	I0314 01:09:10.388097 2164567 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hk6xd" in "kube-system" namespace has status "Ready":"False"
	I0314 01:09:07.113178 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:09:09.612196 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:09:12.886944 2164567 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hk6xd" in "kube-system" namespace has status "Ready":"False"
	I0314 01:09:15.386918 2164567 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hk6xd" in "kube-system" namespace has status "Ready":"False"
	I0314 01:09:12.113067 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:09:14.612742 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:09:17.887326 2164567 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hk6xd" in "kube-system" namespace has status "Ready":"False"
	I0314 01:09:20.387845 2164567 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hk6xd" in "kube-system" namespace has status "Ready":"False"
	I0314 01:09:17.112547 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:09:19.612698 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:09:21.612869 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:09:22.887476 2164567 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hk6xd" in "kube-system" namespace has status "Ready":"False"
	I0314 01:09:25.389335 2164567 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hk6xd" in "kube-system" namespace has status "Ready":"False"
	I0314 01:09:23.617303 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:09:26.113238 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:09:27.887020 2164567 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hk6xd" in "kube-system" namespace has status "Ready":"False"
	I0314 01:09:29.887285 2164567 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hk6xd" in "kube-system" namespace has status "Ready":"False"
	I0314 01:09:28.114666 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:09:30.612076 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:09:32.387495 2164567 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hk6xd" in "kube-system" namespace has status "Ready":"False"
	I0314 01:09:34.388112 2164567 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hk6xd" in "kube-system" namespace has status "Ready":"False"
	I0314 01:09:32.612416 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:09:34.612869 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:09:36.613188 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:09:36.388145 2164567 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hk6xd" in "kube-system" namespace has status "Ready":"False"
	I0314 01:09:38.886874 2164567 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hk6xd" in "kube-system" namespace has status "Ready":"False"
	I0314 01:09:38.613854 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:09:40.680861 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:09:40.887708 2164567 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hk6xd" in "kube-system" namespace has status "Ready":"False"
	I0314 01:09:43.386550 2164567 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hk6xd" in "kube-system" namespace has status "Ready":"False"
	I0314 01:09:45.389402 2164567 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hk6xd" in "kube-system" namespace has status "Ready":"False"
	I0314 01:09:43.112789 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:09:45.128449 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:09:47.887089 2164567 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hk6xd" in "kube-system" namespace has status "Ready":"False"
	I0314 01:09:49.888177 2164567 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hk6xd" in "kube-system" namespace has status "Ready":"False"
	I0314 01:09:47.611451 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:09:49.615256 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:09:52.387158 2164567 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hk6xd" in "kube-system" namespace has status "Ready":"False"
	I0314 01:09:54.886211 2164567 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hk6xd" in "kube-system" namespace has status "Ready":"False"
	I0314 01:09:52.112495 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:09:54.612531 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:09:56.612712 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:09:56.887188 2164567 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hk6xd" in "kube-system" namespace has status "Ready":"False"
	I0314 01:09:59.386541 2164567 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hk6xd" in "kube-system" namespace has status "Ready":"False"
	I0314 01:09:58.616059 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:10:01.113394 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:10:01.387343 2164567 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hk6xd" in "kube-system" namespace has status "Ready":"False"
	I0314 01:10:03.886778 2164567 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hk6xd" in "kube-system" namespace has status "Ready":"False"
	I0314 01:10:03.612567 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:10:05.612888 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:10:05.886894 2164567 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hk6xd" in "kube-system" namespace has status "Ready":"False"
	I0314 01:10:07.888205 2164567 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hk6xd" in "kube-system" namespace has status "Ready":"False"
	I0314 01:10:10.386524 2164567 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hk6xd" in "kube-system" namespace has status "Ready":"False"
	I0314 01:10:08.112406 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:10:10.113435 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:10:12.386719 2164567 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hk6xd" in "kube-system" namespace has status "Ready":"False"
	I0314 01:10:14.386808 2164567 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hk6xd" in "kube-system" namespace has status "Ready":"False"
	I0314 01:10:12.114104 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:10:14.613004 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:10:16.387502 2164567 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hk6xd" in "kube-system" namespace has status "Ready":"False"
	I0314 01:10:18.387556 2164567 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hk6xd" in "kube-system" namespace has status "Ready":"False"
	I0314 01:10:17.112735 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:10:19.613008 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:10:20.886834 2164567 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hk6xd" in "kube-system" namespace has status "Ready":"False"
	I0314 01:10:22.886910 2164567 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hk6xd" in "kube-system" namespace has status "Ready":"False"
	I0314 01:10:24.887488 2164567 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hk6xd" in "kube-system" namespace has status "Ready":"False"
	I0314 01:10:22.121318 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:10:24.612298 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:10:26.613054 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:10:27.387036 2164567 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hk6xd" in "kube-system" namespace has status "Ready":"False"
	I0314 01:10:29.887271 2164567 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hk6xd" in "kube-system" namespace has status "Ready":"False"
	I0314 01:10:29.112840 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:10:31.163820 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:10:32.386651 2164567 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hk6xd" in "kube-system" namespace has status "Ready":"False"
	I0314 01:10:34.887022 2164567 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hk6xd" in "kube-system" namespace has status "Ready":"False"
	I0314 01:10:33.611968 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:10:35.612071 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:10:37.386871 2164567 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hk6xd" in "kube-system" namespace has status "Ready":"False"
	I0314 01:10:39.886406 2164567 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hk6xd" in "kube-system" namespace has status "Ready":"False"
	I0314 01:10:38.112892 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:10:40.113241 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:10:41.886512 2164567 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hk6xd" in "kube-system" namespace has status "Ready":"False"
	I0314 01:10:44.386946 2164567 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hk6xd" in "kube-system" namespace has status "Ready":"False"
	I0314 01:10:42.116762 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:10:44.612296 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:10:46.886412 2164567 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hk6xd" in "kube-system" namespace has status "Ready":"False"
	I0314 01:10:48.887110 2164567 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hk6xd" in "kube-system" namespace has status "Ready":"False"
	I0314 01:10:47.112173 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:10:49.114394 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:10:51.611692 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:10:51.387022 2164567 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hk6xd" in "kube-system" namespace has status "Ready":"False"
	I0314 01:10:53.886902 2164567 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hk6xd" in "kube-system" namespace has status "Ready":"False"
	I0314 01:10:53.612772 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:10:56.112956 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:10:56.387146 2164567 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hk6xd" in "kube-system" namespace has status "Ready":"False"
	I0314 01:10:58.886494 2164567 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hk6xd" in "kube-system" namespace has status "Ready":"False"
	I0314 01:10:58.612462 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:11:01.113085 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:11:00.886737 2164567 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hk6xd" in "kube-system" namespace has status "Ready":"False"
	I0314 01:11:03.386363 2164567 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hk6xd" in "kube-system" namespace has status "Ready":"False"
	I0314 01:11:05.386837 2164567 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hk6xd" in "kube-system" namespace has status "Ready":"False"
	I0314 01:11:03.612572 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:11:06.112431 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:11:07.886728 2164567 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hk6xd" in "kube-system" namespace has status "Ready":"False"
	I0314 01:11:10.386573 2164567 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hk6xd" in "kube-system" namespace has status "Ready":"False"
	I0314 01:11:08.612641 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:11:11.112936 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:11:12.386846 2164567 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hk6xd" in "kube-system" namespace has status "Ready":"False"
	I0314 01:11:14.388063 2164567 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hk6xd" in "kube-system" namespace has status "Ready":"False"
	I0314 01:11:13.613871 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:11:16.112520 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:11:16.886435 2164567 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hk6xd" in "kube-system" namespace has status "Ready":"False"
	I0314 01:11:18.887242 2164567 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hk6xd" in "kube-system" namespace has status "Ready":"False"
	I0314 01:11:18.113134 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:11:20.114090 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:11:21.387960 2164567 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hk6xd" in "kube-system" namespace has status "Ready":"False"
	I0314 01:11:23.886894 2164567 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hk6xd" in "kube-system" namespace has status "Ready":"False"
	I0314 01:11:22.611978 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:11:24.618647 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:11:26.387241 2164567 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hk6xd" in "kube-system" namespace has status "Ready":"False"
	I0314 01:11:28.886834 2164567 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hk6xd" in "kube-system" namespace has status "Ready":"False"
	I0314 01:11:27.113224 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:11:29.611941 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:11:30.887460 2164567 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hk6xd" in "kube-system" namespace has status "Ready":"False"
	I0314 01:11:33.386893 2164567 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hk6xd" in "kube-system" namespace has status "Ready":"False"
	I0314 01:11:32.111941 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:11:34.114194 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:11:36.611543 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:11:35.886331 2164567 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hk6xd" in "kube-system" namespace has status "Ready":"False"
	I0314 01:11:37.886975 2164567 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hk6xd" in "kube-system" namespace has status "Ready":"False"
	I0314 01:11:40.386944 2164567 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hk6xd" in "kube-system" namespace has status "Ready":"False"
	I0314 01:11:38.624690 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:11:41.112194 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:11:42.387245 2164567 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hk6xd" in "kube-system" namespace has status "Ready":"False"
	I0314 01:11:44.886999 2164567 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hk6xd" in "kube-system" namespace has status "Ready":"False"
	I0314 01:11:43.112729 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:11:45.125051 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:11:47.389104 2164567 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hk6xd" in "kube-system" namespace has status "Ready":"False"
	I0314 01:11:48.387655 2164567 pod_ready.go:81] duration metric: took 4m0.00727494s for pod "metrics-server-57f55c9bc5-hk6xd" in "kube-system" namespace to be "Ready" ...
	E0314 01:11:48.387682 2164567 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0314 01:11:48.387701 2164567 pod_ready.go:38] duration metric: took 4m14.219872314s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
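
This expiry is the pivotal event in the log: every Ready:"False" line above is one poll of metrics-server-57f55c9bc5-hk6xd, and after the 4m budget lapses the wait's context reports DeadlineExceeded and the harness moves on to gathering diagnostics. The loop shape, as a minimal sketch (checkReady is a placeholder, not minikube's pod_ready.go):

	package podwait

	import (
		"context"
		"time"
	)

	// waitReady polls checkReady until it reports true, fails, or the
	// context deadline passes — yielding the "context deadline exceeded"
	// error seen above.
	func waitReady(ctx context.Context, interval time.Duration, checkReady func() (bool, error)) error {
		ticker := time.NewTicker(interval)
		defer ticker.Stop()
		for {
			ready, err := checkReady()
			if err != nil {
				return err
			}
			if ready {
				return nil
			}
			select {
			case <-ctx.Done():
				return ctx.Err()
			case <-ticker.C:
			}
		}
	}
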
	I0314 01:11:48.387715 2164567 api_server.go:52] waiting for apiserver process to appear ...
	I0314 01:11:48.387745 2164567 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:11:48.387821 2164567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:11:48.435303 2164567 cri.go:89] found id: "83e70fac6fc9f0a4771305e9540a6589461cc3a2484f50ba174557b1f4dd17a6"
	I0314 01:11:48.435330 2164567 cri.go:89] found id: "35a448a37fe417bd5ef90a5f2619fd285e7f7b440884d6ac9bf11cbb9b611a8f"
	I0314 01:11:48.435337 2164567 cri.go:89] found id: ""
	I0314 01:11:48.435345 2164567 logs.go:276] 2 containers: [83e70fac6fc9f0a4771305e9540a6589461cc3a2484f50ba174557b1f4dd17a6 35a448a37fe417bd5ef90a5f2619fd285e7f7b440884d6ac9bf11cbb9b611a8f]
	I0314 01:11:48.435424 2164567 ssh_runner.go:195] Run: which crictl
	I0314 01:11:48.439114 2164567 ssh_runner.go:195] Run: which crictl
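
The container-listing blocks from here on all follow one recipe: `crictl ps -a --quiet --name=<component>` returns the matching container IDs one per line (two per component here, since the restart left an exited instance beside the running one), and `which crictl` locates the binary for the later log-gathering commands. Roughly, in Go (an illustrative wrapper, not cri.go itself):

	package crilist

	import (
		"os/exec"
		"strings"
	)

	// listIDs returns the IDs of all containers (running or exited) whose
	// name matches the filter, split from crictl's --quiet output.
	func listIDs(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}
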
	I0314 01:11:48.442788 2164567 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0314 01:11:48.442872 2164567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:11:48.486625 2164567 cri.go:89] found id: "edd6991f7b2f89493036abb17b74cda6026808614734286a5086dd892d4d83f4"
	I0314 01:11:48.486647 2164567 cri.go:89] found id: "4b18a1225338f6b055a8ed1e1d1fdf13dc3e26f2d1c4e57dacd35453fb92208b"
	I0314 01:11:48.486653 2164567 cri.go:89] found id: ""
	I0314 01:11:48.486661 2164567 logs.go:276] 2 containers: [edd6991f7b2f89493036abb17b74cda6026808614734286a5086dd892d4d83f4 4b18a1225338f6b055a8ed1e1d1fdf13dc3e26f2d1c4e57dacd35453fb92208b]
	I0314 01:11:48.486720 2164567 ssh_runner.go:195] Run: which crictl
	I0314 01:11:48.491928 2164567 ssh_runner.go:195] Run: which crictl
	I0314 01:11:48.495470 2164567 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0314 01:11:48.495554 2164567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:11:48.536471 2164567 cri.go:89] found id: "f951ff206e24865e25fedf9d07490b40ff93469bd720cec184189041d9aebee0"
	I0314 01:11:48.536494 2164567 cri.go:89] found id: "fce51585db2ccb145151be13dcfb93f128f9c83b3d5057ed298f4c72a4923602"
	I0314 01:11:48.536499 2164567 cri.go:89] found id: ""
	I0314 01:11:48.536507 2164567 logs.go:276] 2 containers: [f951ff206e24865e25fedf9d07490b40ff93469bd720cec184189041d9aebee0 fce51585db2ccb145151be13dcfb93f128f9c83b3d5057ed298f4c72a4923602]
	I0314 01:11:48.536584 2164567 ssh_runner.go:195] Run: which crictl
	I0314 01:11:48.540813 2164567 ssh_runner.go:195] Run: which crictl
	I0314 01:11:48.544961 2164567 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:11:48.545049 2164567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:11:48.581919 2164567 cri.go:89] found id: "1831385fcf3d81ce237870ecb4323a71c6f6032b775571ce1bfd9b1aa401f240"
	I0314 01:11:48.581939 2164567 cri.go:89] found id: "f879b7c53550f5f73f52a19c81373092a43f07fa2c0eab968e4ca6e842db66ad"
	I0314 01:11:48.581944 2164567 cri.go:89] found id: ""
	I0314 01:11:48.581951 2164567 logs.go:276] 2 containers: [1831385fcf3d81ce237870ecb4323a71c6f6032b775571ce1bfd9b1aa401f240 f879b7c53550f5f73f52a19c81373092a43f07fa2c0eab968e4ca6e842db66ad]
	I0314 01:11:48.582008 2164567 ssh_runner.go:195] Run: which crictl
	I0314 01:11:48.585775 2164567 ssh_runner.go:195] Run: which crictl
	I0314 01:11:48.589347 2164567 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:11:48.589448 2164567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:11:48.639954 2164567 cri.go:89] found id: "785351628119d6ad774adc88f0d82526af3626432b44b411d9c17da27d8784d1"
	I0314 01:11:48.639991 2164567 cri.go:89] found id: "5b506479b9f8549a1328dcd0ed478bc43502c6dc60ed26325977136263ab523c"
	I0314 01:11:48.639996 2164567 cri.go:89] found id: ""
	I0314 01:11:48.640004 2164567 logs.go:276] 2 containers: [785351628119d6ad774adc88f0d82526af3626432b44b411d9c17da27d8784d1 5b506479b9f8549a1328dcd0ed478bc43502c6dc60ed26325977136263ab523c]
	I0314 01:11:48.640095 2164567 ssh_runner.go:195] Run: which crictl
	I0314 01:11:48.643855 2164567 ssh_runner.go:195] Run: which crictl
	I0314 01:11:48.647443 2164567 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:11:48.647511 2164567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:11:48.691880 2164567 cri.go:89] found id: "989c71405847bd5f227e2c830feb302d068f9ba20658db76cdd765f46c08880f"
	I0314 01:11:48.691900 2164567 cri.go:89] found id: "dedbba40a1e0160ff417b93d3e8c30371b7d47b139e6891c9e49311c61cb963a"
	I0314 01:11:48.691904 2164567 cri.go:89] found id: ""
	I0314 01:11:48.691912 2164567 logs.go:276] 2 containers: [989c71405847bd5f227e2c830feb302d068f9ba20658db76cdd765f46c08880f dedbba40a1e0160ff417b93d3e8c30371b7d47b139e6891c9e49311c61cb963a]
	I0314 01:11:48.691966 2164567 ssh_runner.go:195] Run: which crictl
	I0314 01:11:48.699850 2164567 ssh_runner.go:195] Run: which crictl
	I0314 01:11:48.704329 2164567 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0314 01:11:48.704434 2164567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:11:48.741931 2164567 cri.go:89] found id: "4a9251917031e35e335bf4d8b0a039d7e6d52c246bb42616850a1b8ac39049af"
	I0314 01:11:48.741991 2164567 cri.go:89] found id: "2d000a4f5e324dc6ef2641ef49e48ce29f1951ae66fae9eba1637086079d822f"
	I0314 01:11:48.742010 2164567 cri.go:89] found id: ""
	I0314 01:11:48.742042 2164567 logs.go:276] 2 containers: [4a9251917031e35e335bf4d8b0a039d7e6d52c246bb42616850a1b8ac39049af 2d000a4f5e324dc6ef2641ef49e48ce29f1951ae66fae9eba1637086079d822f]
	I0314 01:11:48.742119 2164567 ssh_runner.go:195] Run: which crictl
	I0314 01:11:48.746396 2164567 ssh_runner.go:195] Run: which crictl
	I0314 01:11:48.749952 2164567 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0314 01:11:48.750052 2164567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0314 01:11:48.791030 2164567 cri.go:89] found id: "4f2288296aec7700bd2c07880a695239698b441d793a71041c1b77e5e2605cf4"
	I0314 01:11:48.791055 2164567 cri.go:89] found id: "ab9f905640559b9ba4c07f291aa488bc62ba2563d3e461c125a650b1b9613d8f"
	I0314 01:11:48.791059 2164567 cri.go:89] found id: ""
	I0314 01:11:48.791067 2164567 logs.go:276] 2 containers: [4f2288296aec7700bd2c07880a695239698b441d793a71041c1b77e5e2605cf4 ab9f905640559b9ba4c07f291aa488bc62ba2563d3e461c125a650b1b9613d8f]
	I0314 01:11:48.791140 2164567 ssh_runner.go:195] Run: which crictl
	I0314 01:11:48.794783 2164567 ssh_runner.go:195] Run: which crictl
	I0314 01:11:48.798344 2164567 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:11:48.798418 2164567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:11:48.844369 2164567 cri.go:89] found id: "f87d7177a951ec4f55aba7a435db0cc522f86ad118288135c55ea54eb077660a"
	I0314 01:11:48.844444 2164567 cri.go:89] found id: ""
	I0314 01:11:48.844469 2164567 logs.go:276] 1 containers: [f87d7177a951ec4f55aba7a435db0cc522f86ad118288135c55ea54eb077660a]
	I0314 01:11:48.844559 2164567 ssh_runner.go:195] Run: which crictl
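The repeated cri.go/logs.go pairs above are minikube enumerating containers per control-plane component: each `crictl ps -a --quiet --name=<component>` prints one container ID per line (including exited containers), and the trailing `found id: ""` appears to be the empty string left over when that output is split on newlines — logs.go:276 then counts only the non-empty IDs. A minimal shell sketch of the same enumeration (assuming crictl is on PATH and the default containerd socket):

	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet storage-provisioner kubernetes-dashboard; do
	  # -a includes exited containers; --quiet prints only IDs, one per line
	  sudo crictl ps -a --quiet --name="$name"
	done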
	I0314 01:11:48.848604 2164567 logs.go:123] Gathering logs for coredns [f951ff206e24865e25fedf9d07490b40ff93469bd720cec184189041d9aebee0] ...
	I0314 01:11:48.848677 2164567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f951ff206e24865e25fedf9d07490b40ff93469bd720cec184189041d9aebee0"
	I0314 01:11:48.902774 2164567 logs.go:123] Gathering logs for kube-proxy [5b506479b9f8549a1328dcd0ed478bc43502c6dc60ed26325977136263ab523c] ...
	I0314 01:11:48.902803 2164567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b506479b9f8549a1328dcd0ed478bc43502c6dc60ed26325977136263ab523c"
	I0314 01:11:48.950625 2164567 logs.go:123] Gathering logs for kube-controller-manager [989c71405847bd5f227e2c830feb302d068f9ba20658db76cdd765f46c08880f] ...
	I0314 01:11:48.950655 2164567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 989c71405847bd5f227e2c830feb302d068f9ba20658db76cdd765f46c08880f"
	I0314 01:11:49.015633 2164567 logs.go:123] Gathering logs for dmesg ...
	I0314 01:11:49.015668 2164567 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:11:49.035807 2164567 logs.go:123] Gathering logs for etcd [4b18a1225338f6b055a8ed1e1d1fdf13dc3e26f2d1c4e57dacd35453fb92208b] ...
	I0314 01:11:49.035839 2164567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b18a1225338f6b055a8ed1e1d1fdf13dc3e26f2d1c4e57dacd35453fb92208b"
	I0314 01:11:49.086580 2164567 logs.go:123] Gathering logs for kube-scheduler [1831385fcf3d81ce237870ecb4323a71c6f6032b775571ce1bfd9b1aa401f240] ...
	I0314 01:11:49.086613 2164567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1831385fcf3d81ce237870ecb4323a71c6f6032b775571ce1bfd9b1aa401f240"
	I0314 01:11:49.136295 2164567 logs.go:123] Gathering logs for kube-apiserver [35a448a37fe417bd5ef90a5f2619fd285e7f7b440884d6ac9bf11cbb9b611a8f] ...
	I0314 01:11:49.136328 2164567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35a448a37fe417bd5ef90a5f2619fd285e7f7b440884d6ac9bf11cbb9b611a8f"
	I0314 01:11:49.201133 2164567 logs.go:123] Gathering logs for kube-apiserver [83e70fac6fc9f0a4771305e9540a6589461cc3a2484f50ba174557b1f4dd17a6] ...
	I0314 01:11:49.201162 2164567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 83e70fac6fc9f0a4771305e9540a6589461cc3a2484f50ba174557b1f4dd17a6"
	I0314 01:11:49.255233 2164567 logs.go:123] Gathering logs for kube-scheduler [f879b7c53550f5f73f52a19c81373092a43f07fa2c0eab968e4ca6e842db66ad] ...
	I0314 01:11:49.255264 2164567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f879b7c53550f5f73f52a19c81373092a43f07fa2c0eab968e4ca6e842db66ad"
	I0314 01:11:49.296441 2164567 logs.go:123] Gathering logs for kube-proxy [785351628119d6ad774adc88f0d82526af3626432b44b411d9c17da27d8784d1] ...
	I0314 01:11:49.296468 2164567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 785351628119d6ad774adc88f0d82526af3626432b44b411d9c17da27d8784d1"
	I0314 01:11:49.337849 2164567 logs.go:123] Gathering logs for storage-provisioner [ab9f905640559b9ba4c07f291aa488bc62ba2563d3e461c125a650b1b9613d8f] ...
	I0314 01:11:49.337877 2164567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ab9f905640559b9ba4c07f291aa488bc62ba2563d3e461c125a650b1b9613d8f"
	I0314 01:11:49.381238 2164567 logs.go:123] Gathering logs for kubernetes-dashboard [f87d7177a951ec4f55aba7a435db0cc522f86ad118288135c55ea54eb077660a] ...
	I0314 01:11:49.381267 2164567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f87d7177a951ec4f55aba7a435db0cc522f86ad118288135c55ea54eb077660a"
	I0314 01:11:49.429871 2164567 logs.go:123] Gathering logs for containerd ...
	I0314 01:11:49.429901 2164567 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0314 01:11:49.501362 2164567 logs.go:123] Gathering logs for kubelet ...
	I0314 01:11:49.501403 2164567 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:11:49.578568 2164567 logs.go:123] Gathering logs for etcd [edd6991f7b2f89493036abb17b74cda6026808614734286a5086dd892d4d83f4] ...
	I0314 01:11:49.578604 2164567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 edd6991f7b2f89493036abb17b74cda6026808614734286a5086dd892d4d83f4"
	I0314 01:11:49.628532 2164567 logs.go:123] Gathering logs for coredns [fce51585db2ccb145151be13dcfb93f128f9c83b3d5057ed298f4c72a4923602] ...
	I0314 01:11:49.628561 2164567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fce51585db2ccb145151be13dcfb93f128f9c83b3d5057ed298f4c72a4923602"
	I0314 01:11:49.680248 2164567 logs.go:123] Gathering logs for kube-controller-manager [dedbba40a1e0160ff417b93d3e8c30371b7d47b139e6891c9e49311c61cb963a] ...
	I0314 01:11:49.680281 2164567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dedbba40a1e0160ff417b93d3e8c30371b7d47b139e6891c9e49311c61cb963a"
	I0314 01:11:49.746183 2164567 logs.go:123] Gathering logs for kindnet [4a9251917031e35e335bf4d8b0a039d7e6d52c246bb42616850a1b8ac39049af] ...
	I0314 01:11:49.746235 2164567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4a9251917031e35e335bf4d8b0a039d7e6d52c246bb42616850a1b8ac39049af"
	I0314 01:11:49.800998 2164567 logs.go:123] Gathering logs for kindnet [2d000a4f5e324dc6ef2641ef49e48ce29f1951ae66fae9eba1637086079d822f] ...
	I0314 01:11:49.801032 2164567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2d000a4f5e324dc6ef2641ef49e48ce29f1951ae66fae9eba1637086079d822f"
	I0314 01:11:49.841666 2164567 logs.go:123] Gathering logs for storage-provisioner [4f2288296aec7700bd2c07880a695239698b441d793a71041c1b77e5e2605cf4] ...
	I0314 01:11:49.841697 2164567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4f2288296aec7700bd2c07880a695239698b441d793a71041c1b77e5e2605cf4"
	I0314 01:11:49.901849 2164567 logs.go:123] Gathering logs for container status ...
	I0314 01:11:49.901926 2164567 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:11:49.967719 2164567 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:11:49.967747 2164567 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
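Each "Gathering logs for ..." step above maps to a single remote command: container logs via `crictl logs --tail 400 <id>`, unit logs via `journalctl -u <unit> -n 400`, kernel messages via dmesg (-H human-readable, -P no pager, -L=never no color, --level restricted to warnings and worse), and the node description via the version-pinned kubectl under /var/lib/minikube/binaries. Run by hand against the same node, the set looks like this (container ID and paths taken verbatim from the log above):

	sudo /usr/bin/crictl logs --tail 400 989c71405847bd5f227e2c830feb302d068f9ba20658db76cdd765f46c08880f
	sudo journalctl -u containerd -n 400
	sudo journalctl -u kubelet -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig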
	I0314 01:11:47.612575 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:11:49.613777 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:11:52.641248 2164567 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:11:52.653698 2164567 api_server.go:72] duration metric: took 4m24.800611548s to wait for apiserver process to appear ...
	I0314 01:11:52.653723 2164567 api_server.go:88] waiting for apiserver healthz status ...
	I0314 01:11:52.653761 2164567 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:11:52.653821 2164567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:11:52.692179 2164567 cri.go:89] found id: "83e70fac6fc9f0a4771305e9540a6589461cc3a2484f50ba174557b1f4dd17a6"
	I0314 01:11:52.692203 2164567 cri.go:89] found id: "35a448a37fe417bd5ef90a5f2619fd285e7f7b440884d6ac9bf11cbb9b611a8f"
	I0314 01:11:52.692208 2164567 cri.go:89] found id: ""
	I0314 01:11:52.692216 2164567 logs.go:276] 2 containers: [83e70fac6fc9f0a4771305e9540a6589461cc3a2484f50ba174557b1f4dd17a6 35a448a37fe417bd5ef90a5f2619fd285e7f7b440884d6ac9bf11cbb9b611a8f]
	I0314 01:11:52.692275 2164567 ssh_runner.go:195] Run: which crictl
	I0314 01:11:52.695938 2164567 ssh_runner.go:195] Run: which crictl
	I0314 01:11:52.699539 2164567 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0314 01:11:52.699627 2164567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:11:52.748070 2164567 cri.go:89] found id: "edd6991f7b2f89493036abb17b74cda6026808614734286a5086dd892d4d83f4"
	I0314 01:11:52.748091 2164567 cri.go:89] found id: "4b18a1225338f6b055a8ed1e1d1fdf13dc3e26f2d1c4e57dacd35453fb92208b"
	I0314 01:11:52.748096 2164567 cri.go:89] found id: ""
	I0314 01:11:52.748102 2164567 logs.go:276] 2 containers: [edd6991f7b2f89493036abb17b74cda6026808614734286a5086dd892d4d83f4 4b18a1225338f6b055a8ed1e1d1fdf13dc3e26f2d1c4e57dacd35453fb92208b]
	I0314 01:11:52.748179 2164567 ssh_runner.go:195] Run: which crictl
	I0314 01:11:52.752784 2164567 ssh_runner.go:195] Run: which crictl
	I0314 01:11:52.756259 2164567 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0314 01:11:52.756374 2164567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:11:52.795957 2164567 cri.go:89] found id: "f951ff206e24865e25fedf9d07490b40ff93469bd720cec184189041d9aebee0"
	I0314 01:11:52.795978 2164567 cri.go:89] found id: "fce51585db2ccb145151be13dcfb93f128f9c83b3d5057ed298f4c72a4923602"
	I0314 01:11:52.795984 2164567 cri.go:89] found id: ""
	I0314 01:11:52.795991 2164567 logs.go:276] 2 containers: [f951ff206e24865e25fedf9d07490b40ff93469bd720cec184189041d9aebee0 fce51585db2ccb145151be13dcfb93f128f9c83b3d5057ed298f4c72a4923602]
	I0314 01:11:52.796079 2164567 ssh_runner.go:195] Run: which crictl
	I0314 01:11:52.800034 2164567 ssh_runner.go:195] Run: which crictl
	I0314 01:11:52.803732 2164567 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:11:52.803811 2164567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:11:52.844829 2164567 cri.go:89] found id: "1831385fcf3d81ce237870ecb4323a71c6f6032b775571ce1bfd9b1aa401f240"
	I0314 01:11:52.844849 2164567 cri.go:89] found id: "f879b7c53550f5f73f52a19c81373092a43f07fa2c0eab968e4ca6e842db66ad"
	I0314 01:11:52.844854 2164567 cri.go:89] found id: ""
	I0314 01:11:52.844862 2164567 logs.go:276] 2 containers: [1831385fcf3d81ce237870ecb4323a71c6f6032b775571ce1bfd9b1aa401f240 f879b7c53550f5f73f52a19c81373092a43f07fa2c0eab968e4ca6e842db66ad]
	I0314 01:11:52.844917 2164567 ssh_runner.go:195] Run: which crictl
	I0314 01:11:52.848797 2164567 ssh_runner.go:195] Run: which crictl
	I0314 01:11:52.855450 2164567 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:11:52.855596 2164567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:11:52.899817 2164567 cri.go:89] found id: "785351628119d6ad774adc88f0d82526af3626432b44b411d9c17da27d8784d1"
	I0314 01:11:52.899838 2164567 cri.go:89] found id: "5b506479b9f8549a1328dcd0ed478bc43502c6dc60ed26325977136263ab523c"
	I0314 01:11:52.899843 2164567 cri.go:89] found id: ""
	I0314 01:11:52.899851 2164567 logs.go:276] 2 containers: [785351628119d6ad774adc88f0d82526af3626432b44b411d9c17da27d8784d1 5b506479b9f8549a1328dcd0ed478bc43502c6dc60ed26325977136263ab523c]
	I0314 01:11:52.899908 2164567 ssh_runner.go:195] Run: which crictl
	I0314 01:11:52.903968 2164567 ssh_runner.go:195] Run: which crictl
	I0314 01:11:52.907447 2164567 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:11:52.907523 2164567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:11:52.947271 2164567 cri.go:89] found id: "989c71405847bd5f227e2c830feb302d068f9ba20658db76cdd765f46c08880f"
	I0314 01:11:52.947292 2164567 cri.go:89] found id: "dedbba40a1e0160ff417b93d3e8c30371b7d47b139e6891c9e49311c61cb963a"
	I0314 01:11:52.947297 2164567 cri.go:89] found id: ""
	I0314 01:11:52.947304 2164567 logs.go:276] 2 containers: [989c71405847bd5f227e2c830feb302d068f9ba20658db76cdd765f46c08880f dedbba40a1e0160ff417b93d3e8c30371b7d47b139e6891c9e49311c61cb963a]
	I0314 01:11:52.947386 2164567 ssh_runner.go:195] Run: which crictl
	I0314 01:11:52.951288 2164567 ssh_runner.go:195] Run: which crictl
	I0314 01:11:52.956202 2164567 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0314 01:11:52.956317 2164567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:11:53.004173 2164567 cri.go:89] found id: "4a9251917031e35e335bf4d8b0a039d7e6d52c246bb42616850a1b8ac39049af"
	I0314 01:11:53.004208 2164567 cri.go:89] found id: "2d000a4f5e324dc6ef2641ef49e48ce29f1951ae66fae9eba1637086079d822f"
	I0314 01:11:53.004213 2164567 cri.go:89] found id: ""
	I0314 01:11:53.004221 2164567 logs.go:276] 2 containers: [4a9251917031e35e335bf4d8b0a039d7e6d52c246bb42616850a1b8ac39049af 2d000a4f5e324dc6ef2641ef49e48ce29f1951ae66fae9eba1637086079d822f]
	I0314 01:11:53.004325 2164567 ssh_runner.go:195] Run: which crictl
	I0314 01:11:53.008487 2164567 ssh_runner.go:195] Run: which crictl
	I0314 01:11:53.012244 2164567 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0314 01:11:53.012372 2164567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0314 01:11:53.050843 2164567 cri.go:89] found id: "4f2288296aec7700bd2c07880a695239698b441d793a71041c1b77e5e2605cf4"
	I0314 01:11:53.050908 2164567 cri.go:89] found id: "ab9f905640559b9ba4c07f291aa488bc62ba2563d3e461c125a650b1b9613d8f"
	I0314 01:11:53.050931 2164567 cri.go:89] found id: ""
	I0314 01:11:53.050953 2164567 logs.go:276] 2 containers: [4f2288296aec7700bd2c07880a695239698b441d793a71041c1b77e5e2605cf4 ab9f905640559b9ba4c07f291aa488bc62ba2563d3e461c125a650b1b9613d8f]
	I0314 01:11:53.051028 2164567 ssh_runner.go:195] Run: which crictl
	I0314 01:11:53.054829 2164567 ssh_runner.go:195] Run: which crictl
	I0314 01:11:53.058593 2164567 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:11:53.058728 2164567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:11:53.097581 2164567 cri.go:89] found id: "f87d7177a951ec4f55aba7a435db0cc522f86ad118288135c55ea54eb077660a"
	I0314 01:11:53.097603 2164567 cri.go:89] found id: ""
	I0314 01:11:53.097611 2164567 logs.go:276] 1 containers: [f87d7177a951ec4f55aba7a435db0cc522f86ad118288135c55ea54eb077660a]
	I0314 01:11:53.097696 2164567 ssh_runner.go:195] Run: which crictl
	I0314 01:11:53.101615 2164567 logs.go:123] Gathering logs for etcd [edd6991f7b2f89493036abb17b74cda6026808614734286a5086dd892d4d83f4] ...
	I0314 01:11:53.101657 2164567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 edd6991f7b2f89493036abb17b74cda6026808614734286a5086dd892d4d83f4"
	I0314 01:11:53.147841 2164567 logs.go:123] Gathering logs for kube-scheduler [1831385fcf3d81ce237870ecb4323a71c6f6032b775571ce1bfd9b1aa401f240] ...
	I0314 01:11:53.147874 2164567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1831385fcf3d81ce237870ecb4323a71c6f6032b775571ce1bfd9b1aa401f240"
	I0314 01:11:53.190297 2164567 logs.go:123] Gathering logs for kube-proxy [785351628119d6ad774adc88f0d82526af3626432b44b411d9c17da27d8784d1] ...
	I0314 01:11:53.190327 2164567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 785351628119d6ad774adc88f0d82526af3626432b44b411d9c17da27d8784d1"
	I0314 01:11:53.227874 2164567 logs.go:123] Gathering logs for kube-proxy [5b506479b9f8549a1328dcd0ed478bc43502c6dc60ed26325977136263ab523c] ...
	I0314 01:11:53.227905 2164567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b506479b9f8549a1328dcd0ed478bc43502c6dc60ed26325977136263ab523c"
	I0314 01:11:53.265697 2164567 logs.go:123] Gathering logs for storage-provisioner [ab9f905640559b9ba4c07f291aa488bc62ba2563d3e461c125a650b1b9613d8f] ...
	I0314 01:11:53.265727 2164567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ab9f905640559b9ba4c07f291aa488bc62ba2563d3e461c125a650b1b9613d8f"
	I0314 01:11:53.310103 2164567 logs.go:123] Gathering logs for kubelet ...
	I0314 01:11:53.310130 2164567 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:11:53.386250 2164567 logs.go:123] Gathering logs for dmesg ...
	I0314 01:11:53.386289 2164567 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:11:53.412257 2164567 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:11:53.412290 2164567 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 01:11:53.542829 2164567 logs.go:123] Gathering logs for kube-apiserver [83e70fac6fc9f0a4771305e9540a6589461cc3a2484f50ba174557b1f4dd17a6] ...
	I0314 01:11:53.542926 2164567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 83e70fac6fc9f0a4771305e9540a6589461cc3a2484f50ba174557b1f4dd17a6"
	I0314 01:11:53.599849 2164567 logs.go:123] Gathering logs for kube-apiserver [35a448a37fe417bd5ef90a5f2619fd285e7f7b440884d6ac9bf11cbb9b611a8f] ...
	I0314 01:11:53.599885 2164567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35a448a37fe417bd5ef90a5f2619fd285e7f7b440884d6ac9bf11cbb9b611a8f"
	I0314 01:11:53.650101 2164567 logs.go:123] Gathering logs for kube-scheduler [f879b7c53550f5f73f52a19c81373092a43f07fa2c0eab968e4ca6e842db66ad] ...
	I0314 01:11:53.650134 2164567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f879b7c53550f5f73f52a19c81373092a43f07fa2c0eab968e4ca6e842db66ad"
	I0314 01:11:53.692094 2164567 logs.go:123] Gathering logs for kindnet [4a9251917031e35e335bf4d8b0a039d7e6d52c246bb42616850a1b8ac39049af] ...
	I0314 01:11:53.692122 2164567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4a9251917031e35e335bf4d8b0a039d7e6d52c246bb42616850a1b8ac39049af"
	I0314 01:11:53.737288 2164567 logs.go:123] Gathering logs for kindnet [2d000a4f5e324dc6ef2641ef49e48ce29f1951ae66fae9eba1637086079d822f] ...
	I0314 01:11:53.737319 2164567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2d000a4f5e324dc6ef2641ef49e48ce29f1951ae66fae9eba1637086079d822f"
	I0314 01:11:53.780846 2164567 logs.go:123] Gathering logs for storage-provisioner [4f2288296aec7700bd2c07880a695239698b441d793a71041c1b77e5e2605cf4] ...
	I0314 01:11:53.780876 2164567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4f2288296aec7700bd2c07880a695239698b441d793a71041c1b77e5e2605cf4"
	I0314 01:11:53.824385 2164567 logs.go:123] Gathering logs for kubernetes-dashboard [f87d7177a951ec4f55aba7a435db0cc522f86ad118288135c55ea54eb077660a] ...
	I0314 01:11:53.824413 2164567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f87d7177a951ec4f55aba7a435db0cc522f86ad118288135c55ea54eb077660a"
	I0314 01:11:53.872487 2164567 logs.go:123] Gathering logs for kube-controller-manager [989c71405847bd5f227e2c830feb302d068f9ba20658db76cdd765f46c08880f] ...
	I0314 01:11:53.872517 2164567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 989c71405847bd5f227e2c830feb302d068f9ba20658db76cdd765f46c08880f"
	I0314 01:11:53.935251 2164567 logs.go:123] Gathering logs for kube-controller-manager [dedbba40a1e0160ff417b93d3e8c30371b7d47b139e6891c9e49311c61cb963a] ...
	I0314 01:11:53.935284 2164567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dedbba40a1e0160ff417b93d3e8c30371b7d47b139e6891c9e49311c61cb963a"
	I0314 01:11:54.008834 2164567 logs.go:123] Gathering logs for containerd ...
	I0314 01:11:54.008871 2164567 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0314 01:11:54.075598 2164567 logs.go:123] Gathering logs for etcd [4b18a1225338f6b055a8ed1e1d1fdf13dc3e26f2d1c4e57dacd35453fb92208b] ...
	I0314 01:11:54.075639 2164567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b18a1225338f6b055a8ed1e1d1fdf13dc3e26f2d1c4e57dacd35453fb92208b"
	I0314 01:11:54.140283 2164567 logs.go:123] Gathering logs for coredns [f951ff206e24865e25fedf9d07490b40ff93469bd720cec184189041d9aebee0] ...
	I0314 01:11:54.140316 2164567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f951ff206e24865e25fedf9d07490b40ff93469bd720cec184189041d9aebee0"
	I0314 01:11:54.191701 2164567 logs.go:123] Gathering logs for coredns [fce51585db2ccb145151be13dcfb93f128f9c83b3d5057ed298f4c72a4923602] ...
	I0314 01:11:54.191735 2164567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fce51585db2ccb145151be13dcfb93f128f9c83b3d5057ed298f4c72a4923602"
	I0314 01:11:54.240460 2164567 logs.go:123] Gathering logs for container status ...
	I0314 01:11:54.240493 2164567 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
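The "container status" gather uses a small shell fallback idiom: the backticked `which crictl || echo crictl` substitutes the literal name crictl when `which` finds nothing, so the command line stays well-formed, and the outer `|| sudo docker ps -a` falls back to Docker only if the crictl invocation fails. A roughly equivalent long-hand sketch (assumed shell, not the exact minikube code path):

	if command -v crictl >/dev/null 2>&1; then
	  sudo crictl ps -a
	else
	  sudo docker ps -a
	fi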
	I0314 01:11:52.112760 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:11:54.613992 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:11:56.810101 2164567 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0314 01:11:56.818426 2164567 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0314 01:11:56.819859 2164567 api_server.go:141] control plane version: v1.29.0-rc.2
	I0314 01:11:56.819885 2164567 api_server.go:131] duration metric: took 4.166154215s to wait for apiserver health ...
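The healthz wait above is a two-stage probe: first `pgrep -xnf kube-apiserver.*minikube.*` (at 01:11:52) confirms the apiserver process exists, then the /healthz endpoint is polled until it answers 200 with body "ok" (at 01:11:56). Checked by hand against the same endpoint it looks roughly like the following, where -k skips TLS verification — a shortcut acceptable only against a throwaway test node:

	curl -sk https://192.168.85.2:8443/healthz
	# expected body on success: ok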
	I0314 01:11:56.819893 2164567 system_pods.go:43] waiting for kube-system pods to appear ...
	I0314 01:11:56.819915 2164567 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:11:56.820017 2164567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:11:56.872334 2164567 cri.go:89] found id: "83e70fac6fc9f0a4771305e9540a6589461cc3a2484f50ba174557b1f4dd17a6"
	I0314 01:11:56.872369 2164567 cri.go:89] found id: "35a448a37fe417bd5ef90a5f2619fd285e7f7b440884d6ac9bf11cbb9b611a8f"
	I0314 01:11:56.872375 2164567 cri.go:89] found id: ""
	I0314 01:11:56.872383 2164567 logs.go:276] 2 containers: [83e70fac6fc9f0a4771305e9540a6589461cc3a2484f50ba174557b1f4dd17a6 35a448a37fe417bd5ef90a5f2619fd285e7f7b440884d6ac9bf11cbb9b611a8f]
	I0314 01:11:56.872456 2164567 ssh_runner.go:195] Run: which crictl
	I0314 01:11:56.876181 2164567 ssh_runner.go:195] Run: which crictl
	I0314 01:11:56.879922 2164567 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0314 01:11:56.879994 2164567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:11:56.922114 2164567 cri.go:89] found id: "edd6991f7b2f89493036abb17b74cda6026808614734286a5086dd892d4d83f4"
	I0314 01:11:56.922136 2164567 cri.go:89] found id: "4b18a1225338f6b055a8ed1e1d1fdf13dc3e26f2d1c4e57dacd35453fb92208b"
	I0314 01:11:56.922141 2164567 cri.go:89] found id: ""
	I0314 01:11:56.922163 2164567 logs.go:276] 2 containers: [edd6991f7b2f89493036abb17b74cda6026808614734286a5086dd892d4d83f4 4b18a1225338f6b055a8ed1e1d1fdf13dc3e26f2d1c4e57dacd35453fb92208b]
	I0314 01:11:56.922245 2164567 ssh_runner.go:195] Run: which crictl
	I0314 01:11:56.926094 2164567 ssh_runner.go:195] Run: which crictl
	I0314 01:11:56.929799 2164567 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0314 01:11:56.929901 2164567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:11:56.968442 2164567 cri.go:89] found id: "f951ff206e24865e25fedf9d07490b40ff93469bd720cec184189041d9aebee0"
	I0314 01:11:56.968463 2164567 cri.go:89] found id: "fce51585db2ccb145151be13dcfb93f128f9c83b3d5057ed298f4c72a4923602"
	I0314 01:11:56.968468 2164567 cri.go:89] found id: ""
	I0314 01:11:56.968476 2164567 logs.go:276] 2 containers: [f951ff206e24865e25fedf9d07490b40ff93469bd720cec184189041d9aebee0 fce51585db2ccb145151be13dcfb93f128f9c83b3d5057ed298f4c72a4923602]
	I0314 01:11:56.968562 2164567 ssh_runner.go:195] Run: which crictl
	I0314 01:11:56.972410 2164567 ssh_runner.go:195] Run: which crictl
	I0314 01:11:56.976728 2164567 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:11:56.976822 2164567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:11:57.018304 2164567 cri.go:89] found id: "1831385fcf3d81ce237870ecb4323a71c6f6032b775571ce1bfd9b1aa401f240"
	I0314 01:11:57.018326 2164567 cri.go:89] found id: "f879b7c53550f5f73f52a19c81373092a43f07fa2c0eab968e4ca6e842db66ad"
	I0314 01:11:57.018332 2164567 cri.go:89] found id: ""
	I0314 01:11:57.018339 2164567 logs.go:276] 2 containers: [1831385fcf3d81ce237870ecb4323a71c6f6032b775571ce1bfd9b1aa401f240 f879b7c53550f5f73f52a19c81373092a43f07fa2c0eab968e4ca6e842db66ad]
	I0314 01:11:57.018426 2164567 ssh_runner.go:195] Run: which crictl
	I0314 01:11:57.022688 2164567 ssh_runner.go:195] Run: which crictl
	I0314 01:11:57.027150 2164567 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:11:57.027284 2164567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:11:57.074795 2164567 cri.go:89] found id: "785351628119d6ad774adc88f0d82526af3626432b44b411d9c17da27d8784d1"
	I0314 01:11:57.074825 2164567 cri.go:89] found id: "5b506479b9f8549a1328dcd0ed478bc43502c6dc60ed26325977136263ab523c"
	I0314 01:11:57.074830 2164567 cri.go:89] found id: ""
	I0314 01:11:57.074845 2164567 logs.go:276] 2 containers: [785351628119d6ad774adc88f0d82526af3626432b44b411d9c17da27d8784d1 5b506479b9f8549a1328dcd0ed478bc43502c6dc60ed26325977136263ab523c]
	I0314 01:11:57.074915 2164567 ssh_runner.go:195] Run: which crictl
	I0314 01:11:57.079429 2164567 ssh_runner.go:195] Run: which crictl
	I0314 01:11:57.083908 2164567 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:11:57.083988 2164567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:11:57.131994 2164567 cri.go:89] found id: "989c71405847bd5f227e2c830feb302d068f9ba20658db76cdd765f46c08880f"
	I0314 01:11:57.132017 2164567 cri.go:89] found id: "dedbba40a1e0160ff417b93d3e8c30371b7d47b139e6891c9e49311c61cb963a"
	I0314 01:11:57.132022 2164567 cri.go:89] found id: ""
	I0314 01:11:57.132029 2164567 logs.go:276] 2 containers: [989c71405847bd5f227e2c830feb302d068f9ba20658db76cdd765f46c08880f dedbba40a1e0160ff417b93d3e8c30371b7d47b139e6891c9e49311c61cb963a]
	I0314 01:11:57.132089 2164567 ssh_runner.go:195] Run: which crictl
	I0314 01:11:57.136043 2164567 ssh_runner.go:195] Run: which crictl
	I0314 01:11:57.140277 2164567 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0314 01:11:57.140404 2164567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:11:57.183416 2164567 cri.go:89] found id: "4a9251917031e35e335bf4d8b0a039d7e6d52c246bb42616850a1b8ac39049af"
	I0314 01:11:57.183438 2164567 cri.go:89] found id: "2d000a4f5e324dc6ef2641ef49e48ce29f1951ae66fae9eba1637086079d822f"
	I0314 01:11:57.183443 2164567 cri.go:89] found id: ""
	I0314 01:11:57.183450 2164567 logs.go:276] 2 containers: [4a9251917031e35e335bf4d8b0a039d7e6d52c246bb42616850a1b8ac39049af 2d000a4f5e324dc6ef2641ef49e48ce29f1951ae66fae9eba1637086079d822f]
	I0314 01:11:57.183507 2164567 ssh_runner.go:195] Run: which crictl
	I0314 01:11:57.187360 2164567 ssh_runner.go:195] Run: which crictl
	I0314 01:11:57.191010 2164567 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:11:57.191098 2164567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:11:57.236121 2164567 cri.go:89] found id: "f87d7177a951ec4f55aba7a435db0cc522f86ad118288135c55ea54eb077660a"
	I0314 01:11:57.236188 2164567 cri.go:89] found id: ""
	I0314 01:11:57.236203 2164567 logs.go:276] 1 containers: [f87d7177a951ec4f55aba7a435db0cc522f86ad118288135c55ea54eb077660a]
	I0314 01:11:57.236268 2164567 ssh_runner.go:195] Run: which crictl
	I0314 01:11:57.240157 2164567 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0314 01:11:57.240230 2164567 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0314 01:11:57.279792 2164567 cri.go:89] found id: "4f2288296aec7700bd2c07880a695239698b441d793a71041c1b77e5e2605cf4"
	I0314 01:11:57.279835 2164567 cri.go:89] found id: "ab9f905640559b9ba4c07f291aa488bc62ba2563d3e461c125a650b1b9613d8f"
	I0314 01:11:57.279840 2164567 cri.go:89] found id: ""
	I0314 01:11:57.279847 2164567 logs.go:276] 2 containers: [4f2288296aec7700bd2c07880a695239698b441d793a71041c1b77e5e2605cf4 ab9f905640559b9ba4c07f291aa488bc62ba2563d3e461c125a650b1b9613d8f]
	I0314 01:11:57.279923 2164567 ssh_runner.go:195] Run: which crictl
	I0314 01:11:57.283585 2164567 ssh_runner.go:195] Run: which crictl
	I0314 01:11:57.287068 2164567 logs.go:123] Gathering logs for kube-apiserver [83e70fac6fc9f0a4771305e9540a6589461cc3a2484f50ba174557b1f4dd17a6] ...
	I0314 01:11:57.287092 2164567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 83e70fac6fc9f0a4771305e9540a6589461cc3a2484f50ba174557b1f4dd17a6"
	I0314 01:11:57.354684 2164567 logs.go:123] Gathering logs for kube-scheduler [f879b7c53550f5f73f52a19c81373092a43f07fa2c0eab968e4ca6e842db66ad] ...
	I0314 01:11:57.354875 2164567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f879b7c53550f5f73f52a19c81373092a43f07fa2c0eab968e4ca6e842db66ad"
	I0314 01:11:57.395125 2164567 logs.go:123] Gathering logs for kube-controller-manager [989c71405847bd5f227e2c830feb302d068f9ba20658db76cdd765f46c08880f] ...
	I0314 01:11:57.395152 2164567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 989c71405847bd5f227e2c830feb302d068f9ba20658db76cdd765f46c08880f"
	I0314 01:11:57.454958 2164567 logs.go:123] Gathering logs for kindnet [2d000a4f5e324dc6ef2641ef49e48ce29f1951ae66fae9eba1637086079d822f] ...
	I0314 01:11:57.454991 2164567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2d000a4f5e324dc6ef2641ef49e48ce29f1951ae66fae9eba1637086079d822f"
	I0314 01:11:57.491878 2164567 logs.go:123] Gathering logs for storage-provisioner [ab9f905640559b9ba4c07f291aa488bc62ba2563d3e461c125a650b1b9613d8f] ...
	I0314 01:11:57.491905 2164567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ab9f905640559b9ba4c07f291aa488bc62ba2563d3e461c125a650b1b9613d8f"
	I0314 01:11:57.529340 2164567 logs.go:123] Gathering logs for kubelet ...
	I0314 01:11:57.529370 2164567 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 01:11:57.605640 2164567 logs.go:123] Gathering logs for kube-apiserver [35a448a37fe417bd5ef90a5f2619fd285e7f7b440884d6ac9bf11cbb9b611a8f] ...
	I0314 01:11:57.605679 2164567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35a448a37fe417bd5ef90a5f2619fd285e7f7b440884d6ac9bf11cbb9b611a8f"
	I0314 01:11:57.663096 2164567 logs.go:123] Gathering logs for coredns [f951ff206e24865e25fedf9d07490b40ff93469bd720cec184189041d9aebee0] ...
	I0314 01:11:57.663127 2164567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f951ff206e24865e25fedf9d07490b40ff93469bd720cec184189041d9aebee0"
	I0314 01:11:57.705627 2164567 logs.go:123] Gathering logs for container status ...
	I0314 01:11:57.705656 2164567 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:11:57.749919 2164567 logs.go:123] Gathering logs for dmesg ...
	I0314 01:11:57.749954 2164567 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:11:57.774608 2164567 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:11:57.774682 2164567 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 01:11:57.921180 2164567 logs.go:123] Gathering logs for etcd [edd6991f7b2f89493036abb17b74cda6026808614734286a5086dd892d4d83f4] ...
	I0314 01:11:57.921213 2164567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 edd6991f7b2f89493036abb17b74cda6026808614734286a5086dd892d4d83f4"
	I0314 01:11:57.974100 2164567 logs.go:123] Gathering logs for kube-proxy [5b506479b9f8549a1328dcd0ed478bc43502c6dc60ed26325977136263ab523c] ...
	I0314 01:11:57.974130 2164567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b506479b9f8549a1328dcd0ed478bc43502c6dc60ed26325977136263ab523c"
	I0314 01:11:58.017633 2164567 logs.go:123] Gathering logs for kindnet [4a9251917031e35e335bf4d8b0a039d7e6d52c246bb42616850a1b8ac39049af] ...
	I0314 01:11:58.017717 2164567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4a9251917031e35e335bf4d8b0a039d7e6d52c246bb42616850a1b8ac39049af"
	I0314 01:11:58.065294 2164567 logs.go:123] Gathering logs for kubernetes-dashboard [f87d7177a951ec4f55aba7a435db0cc522f86ad118288135c55ea54eb077660a] ...
	I0314 01:11:58.065324 2164567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f87d7177a951ec4f55aba7a435db0cc522f86ad118288135c55ea54eb077660a"
	I0314 01:11:58.113334 2164567 logs.go:123] Gathering logs for containerd ...
	I0314 01:11:58.113359 2164567 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0314 01:11:58.177812 2164567 logs.go:123] Gathering logs for etcd [4b18a1225338f6b055a8ed1e1d1fdf13dc3e26f2d1c4e57dacd35453fb92208b] ...
	I0314 01:11:58.177849 2164567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b18a1225338f6b055a8ed1e1d1fdf13dc3e26f2d1c4e57dacd35453fb92208b"
	I0314 01:11:58.225875 2164567 logs.go:123] Gathering logs for coredns [fce51585db2ccb145151be13dcfb93f128f9c83b3d5057ed298f4c72a4923602] ...
	I0314 01:11:58.225905 2164567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fce51585db2ccb145151be13dcfb93f128f9c83b3d5057ed298f4c72a4923602"
	I0314 01:11:58.292330 2164567 logs.go:123] Gathering logs for kube-scheduler [1831385fcf3d81ce237870ecb4323a71c6f6032b775571ce1bfd9b1aa401f240] ...
	I0314 01:11:58.292359 2164567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1831385fcf3d81ce237870ecb4323a71c6f6032b775571ce1bfd9b1aa401f240"
	I0314 01:11:58.348687 2164567 logs.go:123] Gathering logs for kube-proxy [785351628119d6ad774adc88f0d82526af3626432b44b411d9c17da27d8784d1] ...
	I0314 01:11:58.348722 2164567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 785351628119d6ad774adc88f0d82526af3626432b44b411d9c17da27d8784d1"
	I0314 01:11:58.391006 2164567 logs.go:123] Gathering logs for kube-controller-manager [dedbba40a1e0160ff417b93d3e8c30371b7d47b139e6891c9e49311c61cb963a] ...
	I0314 01:11:58.391033 2164567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dedbba40a1e0160ff417b93d3e8c30371b7d47b139e6891c9e49311c61cb963a"
	I0314 01:11:58.447997 2164567 logs.go:123] Gathering logs for storage-provisioner [4f2288296aec7700bd2c07880a695239698b441d793a71041c1b77e5e2605cf4] ...
	I0314 01:11:58.448033 2164567 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4f2288296aec7700bd2c07880a695239698b441d793a71041c1b77e5e2605cf4"
	I0314 01:12:00.996775 2164567 system_pods.go:59] 9 kube-system pods found
	I0314 01:12:00.996893 2164567 system_pods.go:61] "coredns-76f75df574-b679l" [a8613488-4868-46b5-87c7-cfb9393c5067] Running
	I0314 01:12:00.996917 2164567 system_pods.go:61] "etcd-no-preload-183952" [463211fe-6381-4a32-971f-6670ad9ab255] Running
	I0314 01:12:00.996938 2164567 system_pods.go:61] "kindnet-889cc" [0e1acc6d-a249-46e2-89e4-f213ac85b312] Running
	I0314 01:12:00.996967 2164567 system_pods.go:61] "kube-apiserver-no-preload-183952" [01a9effa-64f4-4d5c-99a6-5ba546d57dd1] Running
	I0314 01:12:00.996988 2164567 system_pods.go:61] "kube-controller-manager-no-preload-183952" [427f665d-a800-401d-8fbc-fc6cfb574ebc] Running
	I0314 01:12:00.997006 2164567 system_pods.go:61] "kube-proxy-h2xc2" [1d2e62a6-bcb8-406f-8028-e27659e92061] Running
	I0314 01:12:00.997026 2164567 system_pods.go:61] "kube-scheduler-no-preload-183952" [d0655f60-196e-4e20-b738-13e2c6c93488] Running
	I0314 01:12:00.997057 2164567 system_pods.go:61] "metrics-server-57f55c9bc5-hk6xd" [aea50b1c-2546-4ae8-b466-7b98beded458] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0314 01:12:00.997079 2164567 system_pods.go:61] "storage-provisioner" [ce4f8840-68d4-4fdb-9d98-10eb7b0a6e13] Running
	I0314 01:12:00.997103 2164567 system_pods.go:74] duration metric: took 4.177203101s to wait for pod list to return data ...
	I0314 01:12:00.997132 2164567 default_sa.go:34] waiting for default service account to be created ...
	I0314 01:12:01.003476 2164567 default_sa.go:45] found service account: "default"
	I0314 01:12:01.003502 2164567 default_sa.go:55] duration metric: took 6.34932ms for default service account to be created ...
	I0314 01:12:01.003513 2164567 system_pods.go:116] waiting for k8s-apps to be running ...
	I0314 01:12:01.012101 2164567 system_pods.go:86] 9 kube-system pods found
	I0314 01:12:01.012204 2164567 system_pods.go:89] "coredns-76f75df574-b679l" [a8613488-4868-46b5-87c7-cfb9393c5067] Running
	I0314 01:12:01.012229 2164567 system_pods.go:89] "etcd-no-preload-183952" [463211fe-6381-4a32-971f-6670ad9ab255] Running
	I0314 01:12:01.012250 2164567 system_pods.go:89] "kindnet-889cc" [0e1acc6d-a249-46e2-89e4-f213ac85b312] Running
	I0314 01:12:01.012271 2164567 system_pods.go:89] "kube-apiserver-no-preload-183952" [01a9effa-64f4-4d5c-99a6-5ba546d57dd1] Running
	I0314 01:12:01.012292 2164567 system_pods.go:89] "kube-controller-manager-no-preload-183952" [427f665d-a800-401d-8fbc-fc6cfb574ebc] Running
	I0314 01:12:01.012313 2164567 system_pods.go:89] "kube-proxy-h2xc2" [1d2e62a6-bcb8-406f-8028-e27659e92061] Running
	I0314 01:12:01.012332 2164567 system_pods.go:89] "kube-scheduler-no-preload-183952" [d0655f60-196e-4e20-b738-13e2c6c93488] Running
	I0314 01:12:01.012357 2164567 system_pods.go:89] "metrics-server-57f55c9bc5-hk6xd" [aea50b1c-2546-4ae8-b466-7b98beded458] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0314 01:12:01.012383 2164567 system_pods.go:89] "storage-provisioner" [ce4f8840-68d4-4fdb-9d98-10eb7b0a6e13] Running
	I0314 01:12:01.012415 2164567 system_pods.go:126] duration metric: took 8.894601ms to wait for k8s-apps to be running ...
	I0314 01:12:01.012443 2164567 system_svc.go:44] waiting for kubelet service to be running ....
	I0314 01:12:01.012520 2164567 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 01:12:01.031440 2164567 system_svc.go:56] duration metric: took 18.988555ms WaitForService to wait for kubelet
	I0314 01:12:01.031515 2164567 kubeadm.go:576] duration metric: took 4m33.178431843s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
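The kubeadm.go:576 map above summarizes what was verified in sequence: the apiserver answered healthz, all nine kube-system pods were listed (eight Running, metrics-server still Pending), the "default" service account existed, and the kubelet systemd unit was active. Approximate manual equivalents, assuming kubectl is pointed at this cluster:

	kubectl -n kube-system get pods
	kubectl -n default get serviceaccount default
	sudo systemctl is-active kubelet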
	I0314 01:12:01.031551 2164567 node_conditions.go:102] verifying NodePressure condition ...
	I0314 01:12:01.039637 2164567 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0314 01:12:01.039720 2164567 node_conditions.go:123] node cpu capacity is 2
	I0314 01:12:01.039748 2164567 node_conditions.go:105] duration metric: took 8.176632ms to run NodePressure ...
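The NodePressure check reads capacity straight off the node object (here 203034800Ki of ephemeral storage and 2 CPUs). The same fields are visible with a jsonpath query, assuming kubectl access to this node:

	kubectl get node no-preload-183952 -o jsonpath='{.status.capacity}'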
	I0314 01:12:01.039773 2164567 start.go:240] waiting for startup goroutines ...
	I0314 01:12:01.039807 2164567 start.go:245] waiting for cluster config update ...
	I0314 01:12:01.039832 2164567 start.go:254] writing updated cluster config ...
	I0314 01:12:01.040149 2164567 ssh_runner.go:195] Run: rm -f paused
	I0314 01:12:01.135241 2164567 start.go:600] kubectl: 1.29.2, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0314 01:12:01.142209 2164567 out.go:177] * Done! kubectl is now configured to use "no-preload-183952" cluster and "default" namespace by default
	I0314 01:11:57.113242 2159335 pod_ready.go:102] pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace has status "Ready":"False"
	I0314 01:11:59.112719 2159335 pod_ready.go:81] duration metric: took 4m0.006577259s for pod "metrics-server-9975d5f86-prnsg" in "kube-system" namespace to be "Ready" ...
	E0314 01:11:59.112744 2159335 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0314 01:11:59.112753 2159335 pod_ready.go:38] duration metric: took 5m30.159634759s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
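In the parallel 2159335 run, pod_ready.go gave up on metrics-server-9975d5f86-prnsg after its 4m0s per-pod budget (the E0314 "context deadline exceeded" line), then closed out 5m30s of extra waiting with the pod still not Ready — consistent with the ImagePullBackOff warnings for the same pod in the kubelet log further down. A hedged manual equivalent of that wait, using the exact pod name from the log to avoid label assumptions:

	kubectl -n kube-system wait --for=condition=Ready pod/metrics-server-9975d5f86-prnsg --timeout=4m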
	I0314 01:11:59.112778 2159335 api_server.go:52] waiting for apiserver process to appear ...
	I0314 01:11:59.112807 2159335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0314 01:11:59.112872 2159335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 01:11:59.158140 2159335 cri.go:89] found id: "fdb44b3f1b8bfa78a5d1d74ef6b0613a584997cf3688d648110b83847d6bfbb7"
	I0314 01:11:59.158161 2159335 cri.go:89] found id: "cf476c9051e98ad0797f500808cadc3fa875ce3d5892a9524d352ac49530d1c7"
	I0314 01:11:59.158165 2159335 cri.go:89] found id: ""
	I0314 01:11:59.158172 2159335 logs.go:276] 2 containers: [fdb44b3f1b8bfa78a5d1d74ef6b0613a584997cf3688d648110b83847d6bfbb7 cf476c9051e98ad0797f500808cadc3fa875ce3d5892a9524d352ac49530d1c7]
	I0314 01:11:59.158245 2159335 ssh_runner.go:195] Run: which crictl
	I0314 01:11:59.161848 2159335 ssh_runner.go:195] Run: which crictl
	I0314 01:11:59.165307 2159335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0314 01:11:59.165397 2159335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 01:11:59.203039 2159335 cri.go:89] found id: "262c4ec2f8f3eb8483e1462b0685a765d10399ebc85e0a7f520af4504e9e469f"
	I0314 01:11:59.203063 2159335 cri.go:89] found id: "1c0e3f86261a4182fbe7a919791938f3f1b269a39c811b556df8aa8f15b310f1"
	I0314 01:11:59.203067 2159335 cri.go:89] found id: ""
	I0314 01:11:59.203075 2159335 logs.go:276] 2 containers: [262c4ec2f8f3eb8483e1462b0685a765d10399ebc85e0a7f520af4504e9e469f 1c0e3f86261a4182fbe7a919791938f3f1b269a39c811b556df8aa8f15b310f1]
	I0314 01:11:59.203161 2159335 ssh_runner.go:195] Run: which crictl
	I0314 01:11:59.206505 2159335 ssh_runner.go:195] Run: which crictl
	I0314 01:11:59.209749 2159335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0314 01:11:59.209832 2159335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 01:11:59.249575 2159335 cri.go:89] found id: "577d3087fa040f93f9110c5070c4732f3d9c28a3063dad9f5fb63d2cb714c5d1"
	I0314 01:11:59.249595 2159335 cri.go:89] found id: "7d7e6014fc3038892fc04a7ac247dec3563fd4e1a68251761f5191e67aa3d29c"
	I0314 01:11:59.249600 2159335 cri.go:89] found id: ""
	I0314 01:11:59.249607 2159335 logs.go:276] 2 containers: [577d3087fa040f93f9110c5070c4732f3d9c28a3063dad9f5fb63d2cb714c5d1 7d7e6014fc3038892fc04a7ac247dec3563fd4e1a68251761f5191e67aa3d29c]
	I0314 01:11:59.249661 2159335 ssh_runner.go:195] Run: which crictl
	I0314 01:11:59.253180 2159335 ssh_runner.go:195] Run: which crictl
	I0314 01:11:59.256325 2159335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0314 01:11:59.256395 2159335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 01:11:59.293600 2159335 cri.go:89] found id: "abe11e046f972c9c239dc5503ce7b4e6abdb261d19262a88ebf1157e26888704"
	I0314 01:11:59.293620 2159335 cri.go:89] found id: "89cecf527c72212c9774ad52dd4fdf563d5e162b2b3c0590daf1a40383e5f87c"
	I0314 01:11:59.293625 2159335 cri.go:89] found id: ""
	I0314 01:11:59.293632 2159335 logs.go:276] 2 containers: [abe11e046f972c9c239dc5503ce7b4e6abdb261d19262a88ebf1157e26888704 89cecf527c72212c9774ad52dd4fdf563d5e162b2b3c0590daf1a40383e5f87c]
	I0314 01:11:59.293691 2159335 ssh_runner.go:195] Run: which crictl
	I0314 01:11:59.297072 2159335 ssh_runner.go:195] Run: which crictl
	I0314 01:11:59.300603 2159335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0314 01:11:59.300711 2159335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 01:11:59.343506 2159335 cri.go:89] found id: "a649cec0099ad8c4f2f59cb4643ea18e343945d9edac27d0c1542055b2ce681c"
	I0314 01:11:59.343573 2159335 cri.go:89] found id: "36947fb39456b70b586f2321c1589d336fbf8121fcabf5f669409c9680ddc202"
	I0314 01:11:59.343585 2159335 cri.go:89] found id: ""
	I0314 01:11:59.343593 2159335 logs.go:276] 2 containers: [a649cec0099ad8c4f2f59cb4643ea18e343945d9edac27d0c1542055b2ce681c 36947fb39456b70b586f2321c1589d336fbf8121fcabf5f669409c9680ddc202]
	I0314 01:11:59.343650 2159335 ssh_runner.go:195] Run: which crictl
	I0314 01:11:59.347304 2159335 ssh_runner.go:195] Run: which crictl
	I0314 01:11:59.350557 2159335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 01:11:59.350629 2159335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 01:11:59.391928 2159335 cri.go:89] found id: "c79d313c53d51ee00f93e8076ad26cc072bc1abea5f1a5b6a64db439ffbbd272"
	I0314 01:11:59.391991 2159335 cri.go:89] found id: "00c36bf0f8362e90a4a6eb52e67b0fde8d3b9e110159c37ce5cff0a34795231b"
	I0314 01:11:59.392003 2159335 cri.go:89] found id: ""
	I0314 01:11:59.392011 2159335 logs.go:276] 2 containers: [c79d313c53d51ee00f93e8076ad26cc072bc1abea5f1a5b6a64db439ffbbd272 00c36bf0f8362e90a4a6eb52e67b0fde8d3b9e110159c37ce5cff0a34795231b]
	I0314 01:11:59.392078 2159335 ssh_runner.go:195] Run: which crictl
	I0314 01:11:59.395688 2159335 ssh_runner.go:195] Run: which crictl
	I0314 01:11:59.399092 2159335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0314 01:11:59.399186 2159335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 01:11:59.437453 2159335 cri.go:89] found id: "661201637b62150c5e65fb761af09186766a8c5ee343b157bd78587674493702"
	I0314 01:11:59.437476 2159335 cri.go:89] found id: "251c6cdd00905f46268a5714aa17f44694250fe47496912c484048b8908e5650"
	I0314 01:11:59.437481 2159335 cri.go:89] found id: ""
	I0314 01:11:59.437488 2159335 logs.go:276] 2 containers: [661201637b62150c5e65fb761af09186766a8c5ee343b157bd78587674493702 251c6cdd00905f46268a5714aa17f44694250fe47496912c484048b8908e5650]
	I0314 01:11:59.437547 2159335 ssh_runner.go:195] Run: which crictl
	I0314 01:11:59.441203 2159335 ssh_runner.go:195] Run: which crictl
	I0314 01:11:59.444733 2159335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 01:11:59.444868 2159335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 01:11:59.483655 2159335 cri.go:89] found id: "c83ee544525cae8ae275b356afc7ab3e13354a5bfdc13e46d3bb967974cc4fd4"
	I0314 01:11:59.483715 2159335 cri.go:89] found id: ""
	I0314 01:11:59.483747 2159335 logs.go:276] 1 containers: [c83ee544525cae8ae275b356afc7ab3e13354a5bfdc13e46d3bb967974cc4fd4]
	I0314 01:11:59.483834 2159335 ssh_runner.go:195] Run: which crictl
	I0314 01:11:59.487513 2159335 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0314 01:11:59.487586 2159335 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0314 01:11:59.527664 2159335 cri.go:89] found id: "0378f91f5f30bc555d7408bc86dd626b1d5ffeb38145d3ee69160c360c8f3416"
	I0314 01:11:59.527685 2159335 cri.go:89] found id: "4e7dc51c48baacd42a05abe380200f2d6c242433155f704a890e45913c11586c"
	I0314 01:11:59.527690 2159335 cri.go:89] found id: ""
	I0314 01:11:59.527697 2159335 logs.go:276] 2 containers: [0378f91f5f30bc555d7408bc86dd626b1d5ffeb38145d3ee69160c360c8f3416 4e7dc51c48baacd42a05abe380200f2d6c242433155f704a890e45913c11586c]
	I0314 01:11:59.527760 2159335 ssh_runner.go:195] Run: which crictl
	I0314 01:11:59.531427 2159335 ssh_runner.go:195] Run: which crictl
	I0314 01:11:59.534848 2159335 logs.go:123] Gathering logs for coredns [7d7e6014fc3038892fc04a7ac247dec3563fd4e1a68251761f5191e67aa3d29c] ...
	I0314 01:11:59.534873 2159335 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7d7e6014fc3038892fc04a7ac247dec3563fd4e1a68251761f5191e67aa3d29c"
	I0314 01:11:59.580402 2159335 logs.go:123] Gathering logs for kube-proxy [a649cec0099ad8c4f2f59cb4643ea18e343945d9edac27d0c1542055b2ce681c] ...
	I0314 01:11:59.580430 2159335 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a649cec0099ad8c4f2f59cb4643ea18e343945d9edac27d0c1542055b2ce681c"
	I0314 01:11:59.621026 2159335 logs.go:123] Gathering logs for kindnet [251c6cdd00905f46268a5714aa17f44694250fe47496912c484048b8908e5650] ...
	I0314 01:11:59.621056 2159335 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 251c6cdd00905f46268a5714aa17f44694250fe47496912c484048b8908e5650"
	I0314 01:11:59.659038 2159335 logs.go:123] Gathering logs for kubernetes-dashboard [c83ee544525cae8ae275b356afc7ab3e13354a5bfdc13e46d3bb967974cc4fd4] ...
	I0314 01:11:59.659065 2159335 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c83ee544525cae8ae275b356afc7ab3e13354a5bfdc13e46d3bb967974cc4fd4"
	I0314 01:11:59.701741 2159335 logs.go:123] Gathering logs for containerd ...
	I0314 01:11:59.701769 2159335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0314 01:11:59.765300 2159335 logs.go:123] Gathering logs for dmesg ...
	I0314 01:11:59.765335 2159335 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 01:11:59.790417 2159335 logs.go:123] Gathering logs for etcd [262c4ec2f8f3eb8483e1462b0685a765d10399ebc85e0a7f520af4504e9e469f] ...
	I0314 01:11:59.790445 2159335 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 262c4ec2f8f3eb8483e1462b0685a765d10399ebc85e0a7f520af4504e9e469f"
	I0314 01:11:59.843121 2159335 logs.go:123] Gathering logs for coredns [577d3087fa040f93f9110c5070c4732f3d9c28a3063dad9f5fb63d2cb714c5d1] ...
	I0314 01:11:59.843149 2159335 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 577d3087fa040f93f9110c5070c4732f3d9c28a3063dad9f5fb63d2cb714c5d1"
	I0314 01:11:59.902029 2159335 logs.go:123] Gathering logs for container status ...
	I0314 01:11:59.902056 2159335 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 01:11:59.992718 2159335 logs.go:123] Gathering logs for kubelet ...
	I0314 01:11:59.992749 2159335 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0314 01:12:00.060282 2159335 logs.go:138] Found kubelet problem: Mar 14 01:06:28 old-k8s-version-023742 kubelet[662]: E0314 01:06:28.974732     662 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-023742" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-023742' and this object
	W0314 01:12:00.060539 2159335 logs.go:138] Found kubelet problem: Mar 14 01:06:28 old-k8s-version-023742 kubelet[662]: E0314 01:06:28.974970     662 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-023742" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-023742' and this object
	W0314 01:12:00.060774 2159335 logs.go:138] Found kubelet problem: Mar 14 01:06:28 old-k8s-version-023742 kubelet[662]: E0314 01:06:28.975144     662 reflector.go:138] object-"kube-system"/"coredns-token-25l2w": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-25l2w" is forbidden: User "system:node:old-k8s-version-023742" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-023742' and this object
	W0314 01:12:00.061038 2159335 logs.go:138] Found kubelet problem: Mar 14 01:06:28 old-k8s-version-023742 kubelet[662]: E0314 01:06:28.975368     662 reflector.go:138] object-"kube-system"/"kindnet-token-5q9tx": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-5q9tx" is forbidden: User "system:node:old-k8s-version-023742" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-023742' and this object
	W0314 01:12:00.061280 2159335 logs.go:138] Found kubelet problem: Mar 14 01:06:28 old-k8s-version-023742 kubelet[662]: E0314 01:06:28.975944     662 reflector.go:138] object-"kube-system"/"storage-provisioner-token-hcjsf": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-hcjsf" is forbidden: User "system:node:old-k8s-version-023742" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-023742' and this object
	W0314 01:12:00.061494 2159335 logs.go:138] Found kubelet problem: Mar 14 01:06:28 old-k8s-version-023742 kubelet[662]: E0314 01:06:28.979394     662 reflector.go:138] object-"default"/"default-token-rpt2n": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-rpt2n" is forbidden: User "system:node:old-k8s-version-023742" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-023742' and this object
	W0314 01:12:00.061719 2159335 logs.go:138] Found kubelet problem: Mar 14 01:06:28 old-k8s-version-023742 kubelet[662]: E0314 01:06:28.980058     662 reflector.go:138] object-"kube-system"/"metrics-server-token-zxqtp": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-zxqtp" is forbidden: User "system:node:old-k8s-version-023742" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-023742' and this object
	W0314 01:12:00.061941 2159335 logs.go:138] Found kubelet problem: Mar 14 01:06:28 old-k8s-version-023742 kubelet[662]: E0314 01:06:28.980106     662 reflector.go:138] object-"kube-system"/"kube-proxy-token-g2rnq": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-g2rnq" is forbidden: User "system:node:old-k8s-version-023742" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-023742' and this object
	W0314 01:12:00.070827 2159335 logs.go:138] Found kubelet problem: Mar 14 01:06:31 old-k8s-version-023742 kubelet[662]: E0314 01:06:31.370582     662 pod_workers.go:191] Error syncing pod 320a7d9f-62a0-45d2-b07d-657dc3bd1b28 ("metrics-server-9975d5f86-prnsg_kube-system(320a7d9f-62a0-45d2-b07d-657dc3bd1b28)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0314 01:12:00.073041 2159335 logs.go:138] Found kubelet problem: Mar 14 01:06:32 old-k8s-version-023742 kubelet[662]: E0314 01:06:32.031743     662 pod_workers.go:191] Error syncing pod 320a7d9f-62a0-45d2-b07d-657dc3bd1b28 ("metrics-server-9975d5f86-prnsg_kube-system(320a7d9f-62a0-45d2-b07d-657dc3bd1b28)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0314 01:12:00.075814 2159335 logs.go:138] Found kubelet problem: Mar 14 01:06:46 old-k8s-version-023742 kubelet[662]: E0314 01:06:46.773820     662 pod_workers.go:191] Error syncing pod 320a7d9f-62a0-45d2-b07d-657dc3bd1b28 ("metrics-server-9975d5f86-prnsg_kube-system(320a7d9f-62a0-45d2-b07d-657dc3bd1b28)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0314 01:12:00.076572 2159335 logs.go:138] Found kubelet problem: Mar 14 01:06:48 old-k8s-version-023742 kubelet[662]: E0314 01:06:48.034705     662 reflector.go:138] object-"kubernetes-dashboard"/"kubernetes-dashboard-token-wht5d": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kubernetes-dashboard-token-wht5d" is forbidden: User "system:node:old-k8s-version-023742" cannot list resource "secrets" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'old-k8s-version-023742' and this object
	W0314 01:12:00.078439 2159335 logs.go:138] Found kubelet problem: Mar 14 01:06:59 old-k8s-version-023742 kubelet[662]: E0314 01:06:59.151608     662 pod_workers.go:191] Error syncing pod 1e66610b-54fb-4a50-b7c1-47326c1c1c70 ("dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"
	W0314 01:12:00.078625 2159335 logs.go:138] Found kubelet problem: Mar 14 01:06:59 old-k8s-version-023742 kubelet[662]: E0314 01:06:59.766765     662 pod_workers.go:191] Error syncing pod 320a7d9f-62a0-45d2-b07d-657dc3bd1b28 ("metrics-server-9975d5f86-prnsg_kube-system(320a7d9f-62a0-45d2-b07d-657dc3bd1b28)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0314 01:12:00.078953 2159335 logs.go:138] Found kubelet problem: Mar 14 01:07:00 old-k8s-version-023742 kubelet[662]: E0314 01:07:00.185170     662 pod_workers.go:191] Error syncing pod 1e66610b-54fb-4a50-b7c1-47326c1c1c70 ("dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"
	W0314 01:12:00.079500 2159335 logs.go:138] Found kubelet problem: Mar 14 01:07:02 old-k8s-version-023742 kubelet[662]: E0314 01:07:02.195749     662 pod_workers.go:191] Error syncing pod 57373aa5-20c2-4873-b3e0-c1c27570f447 ("storage-provisioner_kube-system(57373aa5-20c2-4873-b3e0-c1c27570f447)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(57373aa5-20c2-4873-b3e0-c1c27570f447)"
	W0314 01:12:00.079839 2159335 logs.go:138] Found kubelet problem: Mar 14 01:07:03 old-k8s-version-023742 kubelet[662]: E0314 01:07:03.347757     662 pod_workers.go:191] Error syncing pod 1e66610b-54fb-4a50-b7c1-47326c1c1c70 ("dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"
	W0314 01:12:00.082701 2159335 logs.go:138] Found kubelet problem: Mar 14 01:07:14 old-k8s-version-023742 kubelet[662]: E0314 01:07:14.815543     662 pod_workers.go:191] Error syncing pod 320a7d9f-62a0-45d2-b07d-657dc3bd1b28 ("metrics-server-9975d5f86-prnsg_kube-system(320a7d9f-62a0-45d2-b07d-657dc3bd1b28)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0314 01:12:00.083162 2159335 logs.go:138] Found kubelet problem: Mar 14 01:07:15 old-k8s-version-023742 kubelet[662]: E0314 01:07:15.222109     662 pod_workers.go:191] Error syncing pod 1e66610b-54fb-4a50-b7c1-47326c1c1c70 ("dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"
	W0314 01:12:00.083775 2159335 logs.go:138] Found kubelet problem: Mar 14 01:07:23 old-k8s-version-023742 kubelet[662]: E0314 01:07:23.347695     662 pod_workers.go:191] Error syncing pod 1e66610b-54fb-4a50-b7c1-47326c1c1c70 ("dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"
	W0314 01:12:00.083984 2159335 logs.go:138] Found kubelet problem: Mar 14 01:07:27 old-k8s-version-023742 kubelet[662]: E0314 01:07:27.771127     662 pod_workers.go:191] Error syncing pod 320a7d9f-62a0-45d2-b07d-657dc3bd1b28 ("metrics-server-9975d5f86-prnsg_kube-system(320a7d9f-62a0-45d2-b07d-657dc3bd1b28)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0314 01:12:00.084571 2159335 logs.go:138] Found kubelet problem: Mar 14 01:07:36 old-k8s-version-023742 kubelet[662]: E0314 01:07:36.268490     662 pod_workers.go:191] Error syncing pod 1e66610b-54fb-4a50-b7c1-47326c1c1c70 ("dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"
	W0314 01:12:00.084759 2159335 logs.go:138] Found kubelet problem: Mar 14 01:07:38 old-k8s-version-023742 kubelet[662]: E0314 01:07:38.766381     662 pod_workers.go:191] Error syncing pod 320a7d9f-62a0-45d2-b07d-657dc3bd1b28 ("metrics-server-9975d5f86-prnsg_kube-system(320a7d9f-62a0-45d2-b07d-657dc3bd1b28)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0314 01:12:00.085087 2159335 logs.go:138] Found kubelet problem: Mar 14 01:07:43 old-k8s-version-023742 kubelet[662]: E0314 01:07:43.347263     662 pod_workers.go:191] Error syncing pod 1e66610b-54fb-4a50-b7c1-47326c1c1c70 ("dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"
	W0314 01:12:00.085270 2159335 logs.go:138] Found kubelet problem: Mar 14 01:07:52 old-k8s-version-023742 kubelet[662]: E0314 01:07:52.766485     662 pod_workers.go:191] Error syncing pod 320a7d9f-62a0-45d2-b07d-657dc3bd1b28 ("metrics-server-9975d5f86-prnsg_kube-system(320a7d9f-62a0-45d2-b07d-657dc3bd1b28)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0314 01:12:00.085601 2159335 logs.go:138] Found kubelet problem: Mar 14 01:07:56 old-k8s-version-023742 kubelet[662]: E0314 01:07:56.766099     662 pod_workers.go:191] Error syncing pod 1e66610b-54fb-4a50-b7c1-47326c1c1c70 ("dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"
	W0314 01:12:00.085925 2159335 logs.go:138] Found kubelet problem: Mar 14 01:08:07 old-k8s-version-023742 kubelet[662]: E0314 01:08:07.770729     662 pod_workers.go:191] Error syncing pod 1e66610b-54fb-4a50-b7c1-47326c1c1c70 ("dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"
	W0314 01:12:00.088552 2159335 logs.go:138] Found kubelet problem: Mar 14 01:08:07 old-k8s-version-023742 kubelet[662]: E0314 01:08:07.790467     662 pod_workers.go:191] Error syncing pod 320a7d9f-62a0-45d2-b07d-657dc3bd1b28 ("metrics-server-9975d5f86-prnsg_kube-system(320a7d9f-62a0-45d2-b07d-657dc3bd1b28)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0314 01:12:00.089148 2159335 logs.go:138] Found kubelet problem: Mar 14 01:08:20 old-k8s-version-023742 kubelet[662]: E0314 01:08:20.365151     662 pod_workers.go:191] Error syncing pod 1e66610b-54fb-4a50-b7c1-47326c1c1c70 ("dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"
	W0314 01:12:00.089334 2159335 logs.go:138] Found kubelet problem: Mar 14 01:08:20 old-k8s-version-023742 kubelet[662]: E0314 01:08:20.769752     662 pod_workers.go:191] Error syncing pod 320a7d9f-62a0-45d2-b07d-657dc3bd1b28 ("metrics-server-9975d5f86-prnsg_kube-system(320a7d9f-62a0-45d2-b07d-657dc3bd1b28)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0314 01:12:00.089658 2159335 logs.go:138] Found kubelet problem: Mar 14 01:08:23 old-k8s-version-023742 kubelet[662]: E0314 01:08:23.347539     662 pod_workers.go:191] Error syncing pod 1e66610b-54fb-4a50-b7c1-47326c1c1c70 ("dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"
	W0314 01:12:00.089844 2159335 logs.go:138] Found kubelet problem: Mar 14 01:08:32 old-k8s-version-023742 kubelet[662]: E0314 01:08:32.767876     662 pod_workers.go:191] Error syncing pod 320a7d9f-62a0-45d2-b07d-657dc3bd1b28 ("metrics-server-9975d5f86-prnsg_kube-system(320a7d9f-62a0-45d2-b07d-657dc3bd1b28)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0314 01:12:00.090188 2159335 logs.go:138] Found kubelet problem: Mar 14 01:08:35 old-k8s-version-023742 kubelet[662]: E0314 01:08:35.771384     662 pod_workers.go:191] Error syncing pod 1e66610b-54fb-4a50-b7c1-47326c1c1c70 ("dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"
	W0314 01:12:00.090369 2159335 logs.go:138] Found kubelet problem: Mar 14 01:08:43 old-k8s-version-023742 kubelet[662]: E0314 01:08:43.766453     662 pod_workers.go:191] Error syncing pod 320a7d9f-62a0-45d2-b07d-657dc3bd1b28 ("metrics-server-9975d5f86-prnsg_kube-system(320a7d9f-62a0-45d2-b07d-657dc3bd1b28)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0314 01:12:00.090695 2159335 logs.go:138] Found kubelet problem: Mar 14 01:08:49 old-k8s-version-023742 kubelet[662]: E0314 01:08:49.766428     662 pod_workers.go:191] Error syncing pod 1e66610b-54fb-4a50-b7c1-47326c1c1c70 ("dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"
	W0314 01:12:00.090874 2159335 logs.go:138] Found kubelet problem: Mar 14 01:08:58 old-k8s-version-023742 kubelet[662]: E0314 01:08:58.766502     662 pod_workers.go:191] Error syncing pod 320a7d9f-62a0-45d2-b07d-657dc3bd1b28 ("metrics-server-9975d5f86-prnsg_kube-system(320a7d9f-62a0-45d2-b07d-657dc3bd1b28)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0314 01:12:00.091194 2159335 logs.go:138] Found kubelet problem: Mar 14 01:09:02 old-k8s-version-023742 kubelet[662]: E0314 01:09:02.765978     662 pod_workers.go:191] Error syncing pod 1e66610b-54fb-4a50-b7c1-47326c1c1c70 ("dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"
	W0314 01:12:00.091387 2159335 logs.go:138] Found kubelet problem: Mar 14 01:09:12 old-k8s-version-023742 kubelet[662]: E0314 01:09:12.766341     662 pod_workers.go:191] Error syncing pod 320a7d9f-62a0-45d2-b07d-657dc3bd1b28 ("metrics-server-9975d5f86-prnsg_kube-system(320a7d9f-62a0-45d2-b07d-657dc3bd1b28)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0314 01:12:00.091707 2159335 logs.go:138] Found kubelet problem: Mar 14 01:09:16 old-k8s-version-023742 kubelet[662]: E0314 01:09:16.766009     662 pod_workers.go:191] Error syncing pod 1e66610b-54fb-4a50-b7c1-47326c1c1c70 ("dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"
	W0314 01:12:00.091896 2159335 logs.go:138] Found kubelet problem: Mar 14 01:09:25 old-k8s-version-023742 kubelet[662]: E0314 01:09:25.766322     662 pod_workers.go:191] Error syncing pod 320a7d9f-62a0-45d2-b07d-657dc3bd1b28 ("metrics-server-9975d5f86-prnsg_kube-system(320a7d9f-62a0-45d2-b07d-657dc3bd1b28)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0314 01:12:00.092219 2159335 logs.go:138] Found kubelet problem: Mar 14 01:09:29 old-k8s-version-023742 kubelet[662]: E0314 01:09:29.766880     662 pod_workers.go:191] Error syncing pod 1e66610b-54fb-4a50-b7c1-47326c1c1c70 ("dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"
	W0314 01:12:00.094659 2159335 logs.go:138] Found kubelet problem: Mar 14 01:09:36 old-k8s-version-023742 kubelet[662]: E0314 01:09:36.773716     662 pod_workers.go:191] Error syncing pod 320a7d9f-62a0-45d2-b07d-657dc3bd1b28 ("metrics-server-9975d5f86-prnsg_kube-system(320a7d9f-62a0-45d2-b07d-657dc3bd1b28)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0314 01:12:00.095254 2159335 logs.go:138] Found kubelet problem: Mar 14 01:09:41 old-k8s-version-023742 kubelet[662]: E0314 01:09:41.548782     662 pod_workers.go:191] Error syncing pod 1e66610b-54fb-4a50-b7c1-47326c1c1c70 ("dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"
	W0314 01:12:00.095595 2159335 logs.go:138] Found kubelet problem: Mar 14 01:09:43 old-k8s-version-023742 kubelet[662]: E0314 01:09:43.347327     662 pod_workers.go:191] Error syncing pod 1e66610b-54fb-4a50-b7c1-47326c1c1c70 ("dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"
	W0314 01:12:00.095785 2159335 logs.go:138] Found kubelet problem: Mar 14 01:09:50 old-k8s-version-023742 kubelet[662]: E0314 01:09:50.766966     662 pod_workers.go:191] Error syncing pod 320a7d9f-62a0-45d2-b07d-657dc3bd1b28 ("metrics-server-9975d5f86-prnsg_kube-system(320a7d9f-62a0-45d2-b07d-657dc3bd1b28)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0314 01:12:00.096113 2159335 logs.go:138] Found kubelet problem: Mar 14 01:09:56 old-k8s-version-023742 kubelet[662]: E0314 01:09:56.766015     662 pod_workers.go:191] Error syncing pod 1e66610b-54fb-4a50-b7c1-47326c1c1c70 ("dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"
	W0314 01:12:00.096296 2159335 logs.go:138] Found kubelet problem: Mar 14 01:10:04 old-k8s-version-023742 kubelet[662]: E0314 01:10:04.766379     662 pod_workers.go:191] Error syncing pod 320a7d9f-62a0-45d2-b07d-657dc3bd1b28 ("metrics-server-9975d5f86-prnsg_kube-system(320a7d9f-62a0-45d2-b07d-657dc3bd1b28)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0314 01:12:00.096625 2159335 logs.go:138] Found kubelet problem: Mar 14 01:10:10 old-k8s-version-023742 kubelet[662]: E0314 01:10:10.766119     662 pod_workers.go:191] Error syncing pod 1e66610b-54fb-4a50-b7c1-47326c1c1c70 ("dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"
	W0314 01:12:00.096809 2159335 logs.go:138] Found kubelet problem: Mar 14 01:10:15 old-k8s-version-023742 kubelet[662]: E0314 01:10:15.767425     662 pod_workers.go:191] Error syncing pod 320a7d9f-62a0-45d2-b07d-657dc3bd1b28 ("metrics-server-9975d5f86-prnsg_kube-system(320a7d9f-62a0-45d2-b07d-657dc3bd1b28)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0314 01:12:00.097131 2159335 logs.go:138] Found kubelet problem: Mar 14 01:10:25 old-k8s-version-023742 kubelet[662]: E0314 01:10:25.766011     662 pod_workers.go:191] Error syncing pod 1e66610b-54fb-4a50-b7c1-47326c1c1c70 ("dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"
	W0314 01:12:00.097313 2159335 logs.go:138] Found kubelet problem: Mar 14 01:10:29 old-k8s-version-023742 kubelet[662]: E0314 01:10:29.766425     662 pod_workers.go:191] Error syncing pod 320a7d9f-62a0-45d2-b07d-657dc3bd1b28 ("metrics-server-9975d5f86-prnsg_kube-system(320a7d9f-62a0-45d2-b07d-657dc3bd1b28)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0314 01:12:00.097636 2159335 logs.go:138] Found kubelet problem: Mar 14 01:10:40 old-k8s-version-023742 kubelet[662]: E0314 01:10:40.766016     662 pod_workers.go:191] Error syncing pod 1e66610b-54fb-4a50-b7c1-47326c1c1c70 ("dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"
	W0314 01:12:00.097817 2159335 logs.go:138] Found kubelet problem: Mar 14 01:10:43 old-k8s-version-023742 kubelet[662]: E0314 01:10:43.767405     662 pod_workers.go:191] Error syncing pod 320a7d9f-62a0-45d2-b07d-657dc3bd1b28 ("metrics-server-9975d5f86-prnsg_kube-system(320a7d9f-62a0-45d2-b07d-657dc3bd1b28)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0314 01:12:00.098144 2159335 logs.go:138] Found kubelet problem: Mar 14 01:10:54 old-k8s-version-023742 kubelet[662]: E0314 01:10:54.765962     662 pod_workers.go:191] Error syncing pod 1e66610b-54fb-4a50-b7c1-47326c1c1c70 ("dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"
	W0314 01:12:00.098344 2159335 logs.go:138] Found kubelet problem: Mar 14 01:10:55 old-k8s-version-023742 kubelet[662]: E0314 01:10:55.766370     662 pod_workers.go:191] Error syncing pod 320a7d9f-62a0-45d2-b07d-657dc3bd1b28 ("metrics-server-9975d5f86-prnsg_kube-system(320a7d9f-62a0-45d2-b07d-657dc3bd1b28)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0314 01:12:00.098533 2159335 logs.go:138] Found kubelet problem: Mar 14 01:11:06 old-k8s-version-023742 kubelet[662]: E0314 01:11:06.766830     662 pod_workers.go:191] Error syncing pod 320a7d9f-62a0-45d2-b07d-657dc3bd1b28 ("metrics-server-9975d5f86-prnsg_kube-system(320a7d9f-62a0-45d2-b07d-657dc3bd1b28)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0314 01:12:00.098861 2159335 logs.go:138] Found kubelet problem: Mar 14 01:11:08 old-k8s-version-023742 kubelet[662]: E0314 01:11:08.765992     662 pod_workers.go:191] Error syncing pod 1e66610b-54fb-4a50-b7c1-47326c1c1c70 ("dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"
	W0314 01:12:00.099043 2159335 logs.go:138] Found kubelet problem: Mar 14 01:11:19 old-k8s-version-023742 kubelet[662]: E0314 01:11:19.766986     662 pod_workers.go:191] Error syncing pod 320a7d9f-62a0-45d2-b07d-657dc3bd1b28 ("metrics-server-9975d5f86-prnsg_kube-system(320a7d9f-62a0-45d2-b07d-657dc3bd1b28)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0314 01:12:00.099479 2159335 logs.go:138] Found kubelet problem: Mar 14 01:11:19 old-k8s-version-023742 kubelet[662]: E0314 01:11:19.771307     662 pod_workers.go:191] Error syncing pod 1e66610b-54fb-4a50-b7c1-47326c1c1c70 ("dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"
	W0314 01:12:00.099671 2159335 logs.go:138] Found kubelet problem: Mar 14 01:11:31 old-k8s-version-023742 kubelet[662]: E0314 01:11:31.766411     662 pod_workers.go:191] Error syncing pod 320a7d9f-62a0-45d2-b07d-657dc3bd1b28 ("metrics-server-9975d5f86-prnsg_kube-system(320a7d9f-62a0-45d2-b07d-657dc3bd1b28)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0314 01:12:00.100002 2159335 logs.go:138] Found kubelet problem: Mar 14 01:11:31 old-k8s-version-023742 kubelet[662]: E0314 01:11:31.768210     662 pod_workers.go:191] Error syncing pod 1e66610b-54fb-4a50-b7c1-47326c1c1c70 ("dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"
	W0314 01:12:00.100332 2159335 logs.go:138] Found kubelet problem: Mar 14 01:11:42 old-k8s-version-023742 kubelet[662]: E0314 01:11:42.766039     662 pod_workers.go:191] Error syncing pod 1e66610b-54fb-4a50-b7c1-47326c1c1c70 ("dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"
	W0314 01:12:00.100526 2159335 logs.go:138] Found kubelet problem: Mar 14 01:11:45 old-k8s-version-023742 kubelet[662]: E0314 01:11:45.766829     662 pod_workers.go:191] Error syncing pod 320a7d9f-62a0-45d2-b07d-657dc3bd1b28 ("metrics-server-9975d5f86-prnsg_kube-system(320a7d9f-62a0-45d2-b07d-657dc3bd1b28)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0314 01:12:00.100851 2159335 logs.go:138] Found kubelet problem: Mar 14 01:11:57 old-k8s-version-023742 kubelet[662]: E0314 01:11:57.767790     662 pod_workers.go:191] Error syncing pod 1e66610b-54fb-4a50-b7c1-47326c1c1c70 ("dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"
	W0314 01:12:00.101036 2159335 logs.go:138] Found kubelet problem: Mar 14 01:11:59 old-k8s-version-023742 kubelet[662]: E0314 01:11:59.770052     662 pod_workers.go:191] Error syncing pod 320a7d9f-62a0-45d2-b07d-657dc3bd1b28 ("metrics-server-9975d5f86-prnsg_kube-system(320a7d9f-62a0-45d2-b07d-657dc3bd1b28)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0314 01:12:00.101050 2159335 logs.go:123] Gathering logs for kube-proxy [36947fb39456b70b586f2321c1589d336fbf8121fcabf5f669409c9680ddc202] ...
	I0314 01:12:00.101074 2159335 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 36947fb39456b70b586f2321c1589d336fbf8121fcabf5f669409c9680ddc202"
	I0314 01:12:00.174338 2159335 logs.go:123] Gathering logs for storage-provisioner [4e7dc51c48baacd42a05abe380200f2d6c242433155f704a890e45913c11586c] ...
	I0314 01:12:00.174402 2159335 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4e7dc51c48baacd42a05abe380200f2d6c242433155f704a890e45913c11586c"
	I0314 01:12:00.342882 2159335 logs.go:123] Gathering logs for kube-controller-manager [c79d313c53d51ee00f93e8076ad26cc072bc1abea5f1a5b6a64db439ffbbd272] ...
	I0314 01:12:00.342916 2159335 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c79d313c53d51ee00f93e8076ad26cc072bc1abea5f1a5b6a64db439ffbbd272"
	I0314 01:12:00.504401 2159335 logs.go:123] Gathering logs for kube-controller-manager [00c36bf0f8362e90a4a6eb52e67b0fde8d3b9e110159c37ce5cff0a34795231b] ...
	I0314 01:12:00.504488 2159335 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 00c36bf0f8362e90a4a6eb52e67b0fde8d3b9e110159c37ce5cff0a34795231b"
	I0314 01:12:00.615545 2159335 logs.go:123] Gathering logs for kindnet [661201637b62150c5e65fb761af09186766a8c5ee343b157bd78587674493702] ...
	I0314 01:12:00.615587 2159335 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 661201637b62150c5e65fb761af09186766a8c5ee343b157bd78587674493702"
	I0314 01:12:00.668756 2159335 logs.go:123] Gathering logs for storage-provisioner [0378f91f5f30bc555d7408bc86dd626b1d5ffeb38145d3ee69160c360c8f3416] ...
	I0314 01:12:00.668788 2159335 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0378f91f5f30bc555d7408bc86dd626b1d5ffeb38145d3ee69160c360c8f3416"
	I0314 01:12:00.713317 2159335 logs.go:123] Gathering logs for kube-apiserver [fdb44b3f1b8bfa78a5d1d74ef6b0613a584997cf3688d648110b83847d6bfbb7] ...
	I0314 01:12:00.713344 2159335 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fdb44b3f1b8bfa78a5d1d74ef6b0613a584997cf3688d648110b83847d6bfbb7"
	I0314 01:12:00.787839 2159335 logs.go:123] Gathering logs for kube-apiserver [cf476c9051e98ad0797f500808cadc3fa875ce3d5892a9524d352ac49530d1c7] ...
	I0314 01:12:00.787872 2159335 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cf476c9051e98ad0797f500808cadc3fa875ce3d5892a9524d352ac49530d1c7"
	I0314 01:12:00.855002 2159335 logs.go:123] Gathering logs for etcd [1c0e3f86261a4182fbe7a919791938f3f1b269a39c811b556df8aa8f15b310f1] ...
	I0314 01:12:00.855046 2159335 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c0e3f86261a4182fbe7a919791938f3f1b269a39c811b556df8aa8f15b310f1"
	I0314 01:12:00.907768 2159335 logs.go:123] Gathering logs for describe nodes ...
	I0314 01:12:00.907808 2159335 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 01:12:01.123021 2159335 logs.go:123] Gathering logs for kube-scheduler [abe11e046f972c9c239dc5503ce7b4e6abdb261d19262a88ebf1157e26888704] ...
	I0314 01:12:01.123090 2159335 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 abe11e046f972c9c239dc5503ce7b4e6abdb261d19262a88ebf1157e26888704"
	I0314 01:12:01.210926 2159335 logs.go:123] Gathering logs for kube-scheduler [89cecf527c72212c9774ad52dd4fdf563d5e162b2b3c0590daf1a40383e5f87c] ...
	I0314 01:12:01.210954 2159335 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 89cecf527c72212c9774ad52dd4fdf563d5e162b2b3c0590daf1a40383e5f87c"
	I0314 01:12:01.269850 2159335 out.go:304] Setting ErrFile to fd 2...
	I0314 01:12:01.270269 2159335 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0314 01:12:01.270366 2159335 out.go:239] X Problems detected in kubelet:
	W0314 01:12:01.270411 2159335 out.go:239]   Mar 14 01:11:31 old-k8s-version-023742 kubelet[662]: E0314 01:11:31.768210     662 pod_workers.go:191] Error syncing pod 1e66610b-54fb-4a50-b7c1-47326c1c1c70 ("dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"
	W0314 01:12:01.270566 2159335 out.go:239]   Mar 14 01:11:42 old-k8s-version-023742 kubelet[662]: E0314 01:11:42.766039     662 pod_workers.go:191] Error syncing pod 1e66610b-54fb-4a50-b7c1-47326c1c1c70 ("dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"
	W0314 01:12:01.270598 2159335 out.go:239]   Mar 14 01:11:45 old-k8s-version-023742 kubelet[662]: E0314 01:11:45.766829     662 pod_workers.go:191] Error syncing pod 320a7d9f-62a0-45d2-b07d-657dc3bd1b28 ("metrics-server-9975d5f86-prnsg_kube-system(320a7d9f-62a0-45d2-b07d-657dc3bd1b28)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0314 01:12:01.270630 2159335 out.go:239]   Mar 14 01:11:57 old-k8s-version-023742 kubelet[662]: E0314 01:11:57.767790     662 pod_workers.go:191] Error syncing pod 1e66610b-54fb-4a50-b7c1-47326c1c1c70 ("dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"
	W0314 01:12:01.270666 2159335 out.go:239]   Mar 14 01:11:59 old-k8s-version-023742 kubelet[662]: E0314 01:11:59.770052     662 pod_workers.go:191] Error syncing pod 320a7d9f-62a0-45d2-b07d-657dc3bd1b28 ("metrics-server-9975d5f86-prnsg_kube-system(320a7d9f-62a0-45d2-b07d-657dc3bd1b28)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0314 01:12:01.270709 2159335 out.go:304] Setting ErrFile to fd 2...
	I0314 01:12:01.270732 2159335 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 01:12:11.271976 2159335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 01:12:11.284381 2159335 api_server.go:72] duration metric: took 6m1.5739088s to wait for apiserver process to appear ...
	I0314 01:12:11.284404 2159335 api_server.go:88] waiting for apiserver healthz status ...
	I0314 01:12:11.286614 2159335 out.go:177] 
	W0314 01:12:11.288783 2159335 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: cluster wait timed out during healthz check
	W0314 01:12:11.288808 2159335 out.go:239] * 
	W0314 01:12:11.289707 2159335 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0314 01:12:11.292264 2159335 out.go:177] 
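The GUEST_START exit above is minikube giving up after its 6m0s wait: the apiserver process was found (the pgrep succeeded) but /healthz never reported healthy. A minimal manual probe of the same endpoint, reusing the pinned kubectl and kubeconfig paths that appear in the runner lines above (a diagnostic sketch, not part of the captured output):

	# Probe the apiserver health endpoint directly; a healthy server prints "ok".
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl get --raw='/healthz' --kubeconfig=/var/lib/minikube/kubeconfig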
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	e4fbd91b78323       523cad1a4df73       2 minutes ago       Exited              dashboard-metrics-scraper   5                   8399c885f589f       dashboard-metrics-scraper-8d5bb5db8-fm8cx
	0378f91f5f30b       ba04bb24b9575       4 minutes ago       Running             storage-provisioner         2                   32e2ec7df0e20       storage-provisioner
	c83ee544525ca       20b332c9a70d8       5 minutes ago       Running             kubernetes-dashboard        0                   d421b8398143b       kubernetes-dashboard-cd95d586-s2g5f
	577d3087fa040       db91994f4ee8f       5 minutes ago       Running             coredns                     1                   a9a250465824c       coredns-74ff55c5b-m88gl
	661201637b621       4740c1948d3fc       5 minutes ago       Running             kindnet-cni                 1                   c9cfaef11fbd1       kindnet-jfzdr
	a649cec0099ad       25a5233254979       5 minutes ago       Running             kube-proxy                  1                   7a9d3a40b5580       kube-proxy-vm5pd
	4e7dc51c48baa       ba04bb24b9575       5 minutes ago       Exited              storage-provisioner         1                   32e2ec7df0e20       storage-provisioner
	78ce9d358e225       1611cd07b61d5       5 minutes ago       Running             busybox                     1                   1c94cf4d45d0b       busybox
	c79d313c53d51       1df8a2b116bd1       5 minutes ago       Running             kube-controller-manager     1                   7b095ba6dd479       kube-controller-manager-old-k8s-version-023742
	abe11e046f972       e7605f88f17d6       5 minutes ago       Running             kube-scheduler              1                   9d0d979f67202       kube-scheduler-old-k8s-version-023742
	fdb44b3f1b8bf       2c08bbbc02d3a       5 minutes ago       Running             kube-apiserver              1                   5f121f8e35507       kube-apiserver-old-k8s-version-023742
	262c4ec2f8f3e       05b738aa1bc63       5 minutes ago       Running             etcd                        1                   deb84b9c2b9c7       etcd-old-k8s-version-023742
	d4ad6ee64f7c4       1611cd07b61d5       6 minutes ago       Exited              busybox                     0                   adfb65cc50f1b       busybox
	7d7e6014fc303       db91994f4ee8f       7 minutes ago       Exited              coredns                     0                   025d918c8bc5b       coredns-74ff55c5b-m88gl
	36947fb39456b       25a5233254979       8 minutes ago       Exited              kube-proxy                  0                   1b83d268fb35d       kube-proxy-vm5pd
	251c6cdd00905       4740c1948d3fc       8 minutes ago       Exited              kindnet-cni                 0                   bcf6a121da458       kindnet-jfzdr
	1c0e3f86261a4       05b738aa1bc63       8 minutes ago       Exited              etcd                        0                   ff6d90560202c       etcd-old-k8s-version-023742
	cf476c9051e98       2c08bbbc02d3a       8 minutes ago       Exited              kube-apiserver              0                   311fb18c2a9b0       kube-apiserver-old-k8s-version-023742
	89cecf527c722       e7605f88f17d6       8 minutes ago       Exited              kube-scheduler              0                   778182a8a7f1f       kube-scheduler-old-k8s-version-023742
	00c36bf0f8362       1df8a2b116bd1       8 minutes ago       Exited              kube-controller-manager     0                   914c29becbcdb       kube-controller-manager-old-k8s-version-023742
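The status table above matches the column layout of crictl's container listing; a sketch for regenerating it on the node, assuming the /usr/bin/crictl path shown in the log-gathering commands earlier:

	# List all containers, running and exited (the CONTAINER/IMAGE/STATE/ATTEMPT/POD columns above).
	sudo /usr/bin/crictl ps -a
	# Filter to the crash-looping scraper by name (regex match).
	sudo /usr/bin/crictl ps -a --name dashboard-metrics-scraper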
	
	
	==> containerd <==
	Mar 14 01:08:07 old-k8s-version-023742 containerd[567]: time="2024-03-14T01:08:07.784320527Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" host=fake.domain
	Mar 14 01:08:07 old-k8s-version-023742 containerd[567]: time="2024-03-14T01:08:07.787167090Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Mar 14 01:08:19 old-k8s-version-023742 containerd[567]: time="2024-03-14T01:08:19.768816549Z" level=info msg="CreateContainer within sandbox \"8399c885f589f2767fc9cdd73e585f663ae679a7be81b3433802c10fc4c890f6\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:4,}"
	Mar 14 01:08:19 old-k8s-version-023742 containerd[567]: time="2024-03-14T01:08:19.786173409Z" level=info msg="CreateContainer within sandbox \"8399c885f589f2767fc9cdd73e585f663ae679a7be81b3433802c10fc4c890f6\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:4,} returns container id \"55044ea4156380a347c03c212af316ff6c69a097699a9d2f0b2d9708a8a19642\""
	Mar 14 01:08:19 old-k8s-version-023742 containerd[567]: time="2024-03-14T01:08:19.786630522Z" level=info msg="StartContainer for \"55044ea4156380a347c03c212af316ff6c69a097699a9d2f0b2d9708a8a19642\""
	Mar 14 01:08:19 old-k8s-version-023742 containerd[567]: time="2024-03-14T01:08:19.876925856Z" level=info msg="StartContainer for \"55044ea4156380a347c03c212af316ff6c69a097699a9d2f0b2d9708a8a19642\" returns successfully"
	Mar 14 01:08:19 old-k8s-version-023742 containerd[567]: time="2024-03-14T01:08:19.909649967Z" level=info msg="shim disconnected" id=55044ea4156380a347c03c212af316ff6c69a097699a9d2f0b2d9708a8a19642
	Mar 14 01:08:19 old-k8s-version-023742 containerd[567]: time="2024-03-14T01:08:19.909710282Z" level=warning msg="cleaning up after shim disconnected" id=55044ea4156380a347c03c212af316ff6c69a097699a9d2f0b2d9708a8a19642 namespace=k8s.io
	Mar 14 01:08:19 old-k8s-version-023742 containerd[567]: time="2024-03-14T01:08:19.909722295Z" level=info msg="cleaning up dead shim"
	Mar 14 01:08:19 old-k8s-version-023742 containerd[567]: time="2024-03-14T01:08:19.918434012Z" level=warning msg="cleanup warnings time=\"2024-03-14T01:08:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2966 runtime=io.containerd.runc.v2\n"
	Mar 14 01:08:20 old-k8s-version-023742 containerd[567]: time="2024-03-14T01:08:20.367917094Z" level=info msg="RemoveContainer for \"910d74720fd8c39a2dde9276b2ef5e6ce23ae019daf35b7bc9d6d64f360d6bda\""
	Mar 14 01:08:20 old-k8s-version-023742 containerd[567]: time="2024-03-14T01:08:20.390434694Z" level=info msg="RemoveContainer for \"910d74720fd8c39a2dde9276b2ef5e6ce23ae019daf35b7bc9d6d64f360d6bda\" returns successfully"
	Mar 14 01:09:36 old-k8s-version-023742 containerd[567]: time="2024-03-14T01:09:36.766846204Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Mar 14 01:09:36 old-k8s-version-023742 containerd[567]: time="2024-03-14T01:09:36.771502684Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" host=fake.domain
	Mar 14 01:09:36 old-k8s-version-023742 containerd[567]: time="2024-03-14T01:09:36.773225907Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Mar 14 01:09:40 old-k8s-version-023742 containerd[567]: time="2024-03-14T01:09:40.768602744Z" level=info msg="CreateContainer within sandbox \"8399c885f589f2767fc9cdd73e585f663ae679a7be81b3433802c10fc4c890f6\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:5,}"
	Mar 14 01:09:40 old-k8s-version-023742 containerd[567]: time="2024-03-14T01:09:40.784405431Z" level=info msg="CreateContainer within sandbox \"8399c885f589f2767fc9cdd73e585f663ae679a7be81b3433802c10fc4c890f6\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:5,} returns container id \"e4fbd91b78323ad88694e984c4cc9e04e2445d86c81406ea301892cae622ebb9\""
	Mar 14 01:09:40 old-k8s-version-023742 containerd[567]: time="2024-03-14T01:09:40.785536181Z" level=info msg="StartContainer for \"e4fbd91b78323ad88694e984c4cc9e04e2445d86c81406ea301892cae622ebb9\""
	Mar 14 01:09:40 old-k8s-version-023742 containerd[567]: time="2024-03-14T01:09:40.854682252Z" level=info msg="StartContainer for \"e4fbd91b78323ad88694e984c4cc9e04e2445d86c81406ea301892cae622ebb9\" returns successfully"
	Mar 14 01:09:40 old-k8s-version-023742 containerd[567]: time="2024-03-14T01:09:40.898335405Z" level=info msg="shim disconnected" id=e4fbd91b78323ad88694e984c4cc9e04e2445d86c81406ea301892cae622ebb9
	Mar 14 01:09:40 old-k8s-version-023742 containerd[567]: time="2024-03-14T01:09:40.898401636Z" level=warning msg="cleaning up after shim disconnected" id=e4fbd91b78323ad88694e984c4cc9e04e2445d86c81406ea301892cae622ebb9 namespace=k8s.io
	Mar 14 01:09:40 old-k8s-version-023742 containerd[567]: time="2024-03-14T01:09:40.898415486Z" level=info msg="cleaning up dead shim"
	Mar 14 01:09:40 old-k8s-version-023742 containerd[567]: time="2024-03-14T01:09:40.907143483Z" level=warning msg="cleanup warnings time=\"2024-03-14T01:09:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3202 runtime=io.containerd.runc.v2\n"
	Mar 14 01:09:41 old-k8s-version-023742 containerd[567]: time="2024-03-14T01:09:41.550235455Z" level=info msg="RemoveContainer for \"55044ea4156380a347c03c212af316ff6c69a097699a9d2f0b2d9708a8a19642\""
	Mar 14 01:09:41 old-k8s-version-023742 containerd[567]: time="2024-03-14T01:09:41.556865191Z" level=info msg="RemoveContainer for \"55044ea4156380a347c03c212af316ff6c69a097699a9d2f0b2d9708a8a19642\" returns successfully"
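Every PullImage failure in this unit log reduces to the same DNS miss: fake.domain does not resolve against 192.168.76.1:53, so the metrics-server image can never be fetched. Two hedged reproductions, assuming containerd runs as a systemd unit on the node (the unit name and crictl path are taken from the log itself):

	# Tail the containerd unit journal this section excerpts (assumes systemd/journald).
	sudo journalctl -u containerd --since "2024-03-14 01:08:00" --no-pager
	# Re-attempt the failing pull by hand; the same "no such host" error is expected.
	sudo /usr/bin/crictl pull fake.domain/registry.k8s.io/echoserver:1.4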
	
	
	==> coredns [577d3087fa040f93f9110c5070c4732f3d9c28a3063dad9f5fb63d2cb714c5d1] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:49019 - 18033 "HINFO IN 8693237155408371918.2373665137395487600. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.023531222s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	I0314 01:07:03.115479       1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-03-14 01:06:33.110757946 +0000 UTC m=+0.029739785) (total time: 30.004594998s):
	Trace[2019727887]: [30.004594998s] [30.004594998s] END
	E0314 01:07:03.115525       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0314 01:07:03.115642       1 trace.go:116] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-03-14 01:06:33.115328527 +0000 UTC m=+0.034310366) (total time: 30.000302437s):
	Trace[939984059]: [30.000302437s] [30.000302437s] END
	E0314 01:07:03.115653       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0314 01:07:03.115949       1 trace.go:116] Trace[911902081]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-03-14 01:06:33.115602208 +0000 UTC m=+0.034584039) (total time: 30.000327905s):
	Trace[911902081]: [30.000327905s] [30.000327905s] END
	E0314 01:07:03.115960       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
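Each of the three ListAndWatch traces above ran the full 30s client timeout before failing with dial tcp 10.96.0.1:443: i/o timeout, i.e. CoreDNS came up before it could reach the apiserver's ClusterIP. A couple of hedged follow-up checks (the k8s-app=kube-dns selector assumes the kubeadm default labels):

	# Confirm 10.96.0.1 is the in-cluster apiserver ClusterIP CoreDNS was dialing.
	kubectl get service kubernetes -n default
	# Pull recent CoreDNS logs by label to see whether the timeouts persist.
	kubectl -n kube-system logs -l k8s-app=kube-dns --tail=20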
	
	
	==> coredns [7d7e6014fc3038892fc04a7ac247dec3563fd4e1a68251761f5191e67aa3d29c] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:53628 - 24383 "HINFO IN 1330114024086565693.6852890089546377471. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012593262s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-023742
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-023742
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=eceebabcbdee8f7e371d6df61e2829908b6c6abe
	                    minikube.k8s.io/name=old-k8s-version-023742
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_14T01_03_38_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 14 Mar 2024 01:03:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-023742
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 14 Mar 2024 01:12:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 14 Mar 2024 01:07:19 +0000   Thu, 14 Mar 2024 01:03:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 14 Mar 2024 01:07:19 +0000   Thu, 14 Mar 2024 01:03:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 14 Mar 2024 01:07:19 +0000   Thu, 14 Mar 2024 01:03:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 14 Mar 2024 01:07:19 +0000   Thu, 14 Mar 2024 01:03:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-023742
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	System Info:
	  Machine ID:                 f52f9fec265543c4b12231dd14316f16
	  System UUID:                efb625a7-7f64-4859-8e8f-772c6b8b7889
	  Boot ID:                    ae603cd7-e506-4ea2-a0e0-984864774a93
	  Kernel Version:             5.15.0-1055-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.6.28
	  Kubelet Version:            v1.20.0
	  Kube-Proxy Version:         v1.20.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m35s
	  kube-system                 coredns-74ff55c5b-m88gl                           100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     8m20s
	  kube-system                 etcd-old-k8s-version-023742                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         8m27s
	  kube-system                 kindnet-jfzdr                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      8m20s
	  kube-system                 kube-apiserver-old-k8s-version-023742             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m27s
	  kube-system                 kube-controller-manager-old-k8s-version-023742    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m27s
	  kube-system                 kube-proxy-vm5pd                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m20s
	  kube-system                 kube-scheduler-old-k8s-version-023742             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m27s
	  kube-system                 metrics-server-9975d5f86-prnsg                    100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         6m24s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m19s
	  kubernetes-dashboard        dashboard-metrics-scraper-8d5bb5db8-fm8cx         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m26s
	  kubernetes-dashboard        kubernetes-dashboard-cd95d586-s2g5f               0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             420Mi (5%)  220Mi (2%)
	  ephemeral-storage  100Mi (0%)  0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  8m47s (x4 over 8m47s)  kubelet     Node old-k8s-version-023742 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m47s (x4 over 8m47s)  kubelet     Node old-k8s-version-023742 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m47s (x4 over 8m47s)  kubelet     Node old-k8s-version-023742 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m27s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m27s                  kubelet     Node old-k8s-version-023742 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m27s                  kubelet     Node old-k8s-version-023742 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m27s                  kubelet     Node old-k8s-version-023742 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m27s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                8m20s                  kubelet     Node old-k8s-version-023742 status is now: NodeReady
	  Normal  Starting                 8m19s                  kube-proxy  Starting kube-proxy.
	  Normal  Starting                 5m56s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m56s (x8 over 5m56s)  kubelet     Node old-k8s-version-023742 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m56s (x8 over 5m56s)  kubelet     Node old-k8s-version-023742 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m56s (x7 over 5m56s)  kubelet     Node old-k8s-version-023742 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m56s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 5m40s                  kube-proxy  Starting kube-proxy.
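
The table above is the events section of `kubectl describe node`; the two waves of NodeHasSufficient* messages (8m27s and 5m56s ago) line up with the two kubelet starts, i.e. the initial boot and the SecondStart under test. The same events can be pulled directly, sorted by time, with something like:

	kubectl --context old-k8s-version-023742 get events --all-namespaces --sort-by=.lastTimestamp

Keep in mind events expire from etcd (one hour by default), so on a longer-lived cluster the older entries here would already be gone.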
	
	
	==> dmesg <==
	[  +0.001209] FS-Cache: N-cookie c=00000078 [p=0000006f fl=2 nc=0 na=1]
	[  +0.001054] FS-Cache: N-cookie d=00000000dfa10ab5{9p.inode} n=00000000a50362d2
	[  +0.001151] FS-Cache: N-key=[8] '13455c0100000000'
	[  +0.002651] FS-Cache: Duplicate cookie detected
	[  +0.000691] FS-Cache: O-cookie c=00000072 [p=0000006f fl=226 nc=0 na=1]
	[  +0.001082] FS-Cache: O-cookie d=00000000dfa10ab5{9p.inode} n=000000009688ab1c
	[  +0.001075] FS-Cache: O-key=[8] '13455c0100000000'
	[  +0.000721] FS-Cache: N-cookie c=00000079 [p=0000006f fl=2 nc=0 na=1]
	[  +0.000962] FS-Cache: N-cookie d=00000000dfa10ab5{9p.inode} n=00000000287e5f22
	[  +0.001127] FS-Cache: N-key=[8] '13455c0100000000'
	[  +2.832003] FS-Cache: Duplicate cookie detected
	[  +0.000710] FS-Cache: O-cookie c=00000070 [p=0000006f fl=226 nc=0 na=1]
	[  +0.001058] FS-Cache: O-cookie d=00000000dfa10ab5{9p.inode} n=000000002897a5bc
	[  +0.001057] FS-Cache: O-key=[8] '12455c0100000000'
	[  +0.000800] FS-Cache: N-cookie c=0000007b [p=0000006f fl=2 nc=0 na=1]
	[  +0.001025] FS-Cache: N-cookie d=00000000dfa10ab5{9p.inode} n=00000000473ae0a8
	[  +0.001093] FS-Cache: N-key=[8] '12455c0100000000'
	[  +0.405755] FS-Cache: Duplicate cookie detected
	[  +0.000783] FS-Cache: O-cookie c=00000075 [p=0000006f fl=226 nc=0 na=1]
	[  +0.001051] FS-Cache: O-cookie d=00000000dfa10ab5{9p.inode} n=000000002e6030fa
	[  +0.001195] FS-Cache: O-key=[8] '19455c0100000000'
	[  +0.000795] FS-Cache: N-cookie c=0000007c [p=0000006f fl=2 nc=0 na=1]
	[  +0.000968] FS-Cache: N-cookie d=00000000dfa10ab5{9p.inode} n=00000000a50362d2
	[  +0.001117] FS-Cache: N-key=[8] '19455c0100000000'
	[Mar14 00:59] overlayfs: failed to resolve '/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/25/fs': -2
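
The one actionable line in this dmesg excerpt is the overlayfs failure: -2 is -ENOENT, meaning containerd asked the kernel to mount a snapshot whose backing directory was already gone. If this recurred, the snapshotter state could be listed from inside the node (assuming containerd's default k8s.io namespace):

	minikube ssh -p old-k8s-version-023742 -- sudo ctr -n k8s.io snapshots ls

The FS-Cache "Duplicate cookie detected" noise above it comes from a 9p filesystem (the cookies are tagged 9p.inode) and is generally harmless kernel chatter.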
	
	
	==> etcd [1c0e3f86261a4182fbe7a919791938f3f1b269a39c811b556df8aa8f15b310f1] <==
	raft2024/03/14 01:03:28 INFO: ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2
	raft2024/03/14 01:03:28 INFO: ea7e25599daad906 became leader at term 2
	raft2024/03/14 01:03:28 INFO: raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2
	2024-03-14 01:03:28.111348 I | etcdserver: published {Name:old-k8s-version-023742 ClientURLs:[https://192.168.76.2:2379]} to cluster 6f20f2c4b2fb5f8a
	2024-03-14 01:03:28.111855 I | embed: ready to serve client requests
	2024-03-14 01:03:28.112997 I | etcdserver: setting up the initial cluster version to 3.4
	2024-03-14 01:03:28.114532 I | embed: ready to serve client requests
	2024-03-14 01:03:28.119641 I | embed: serving client requests on 127.0.0.1:2379
	2024-03-14 01:03:28.135085 I | embed: serving client requests on 192.168.76.2:2379
	2024-03-14 01:03:28.151244 N | etcdserver/membership: set the initial cluster version to 3.4
	2024-03-14 01:03:28.151529 I | etcdserver/api: enabled capabilities for version 3.4
	2024-03-14 01:03:37.009537 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-14 01:03:48.874818 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-14 01:03:50.962939 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-14 01:04:00.963156 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-14 01:04:10.963085 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-14 01:04:20.963113 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-14 01:04:30.963126 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-14 01:04:40.963057 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-14 01:04:50.963233 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-14 01:05:00.963069 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-14 01:05:10.963227 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-14 01:05:20.963051 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-14 01:05:30.963332 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-14 01:05:40.963143 I | etcdserver/api/etcdhttp: /health OK (status code 200)
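
This is the etcd instance from the first cluster start: a single-member raft that elects itself at term 2 and then answers its /health probe every ten seconds until the stop. The endpoint can also be queried by hand; a sketch, assuming the usual minikube cert layout under /var/lib/minikube/certs/etcd:

	minikube ssh -p old-k8s-version-023742 -- sudo curl -s \
	  --cacert /var/lib/minikube/certs/etcd/ca.crt \
	  --cert /var/lib/minikube/certs/etcd/healthcheck-client.crt \
	  --key /var/lib/minikube/certs/etcd/healthcheck-client.key \
	  https://127.0.0.1:2379/health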
	
	
	==> etcd [262c4ec2f8f3eb8483e1462b0685a765d10399ebc85e0a7f520af4504e9e469f] <==
	2024-03-14 01:08:06.278137 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-14 01:08:16.278254 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-14 01:08:26.278098 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-14 01:08:36.278195 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-14 01:08:46.278063 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-14 01:08:56.278113 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-14 01:09:06.278235 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-14 01:09:16.278147 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-14 01:09:26.278282 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-14 01:09:36.278182 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-14 01:09:46.278160 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-14 01:09:56.278217 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-14 01:10:06.278109 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-14 01:10:16.278124 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-14 01:10:26.278143 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-14 01:10:36.278050 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-14 01:10:46.278144 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-14 01:10:56.278089 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-14 01:11:06.278155 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-14 01:11:16.278357 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-14 01:11:26.278267 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-14 01:11:36.278327 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-14 01:11:46.278142 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-14 01:11:56.278179 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-14 01:12:06.278192 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> kernel <==
	 01:12:13 up  8:54,  0 users,  load average: 0.24, 1.61, 2.25
	Linux old-k8s-version-023742 5.15.0-1055-aws #60~20.04.1-Ubuntu SMP Thu Feb 22 15:54:21 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [251c6cdd00905f46268a5714aa17f44694250fe47496912c484048b8908e5650] <==
	I0314 01:03:53.890191       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0314 01:03:53.890256       1 main.go:107] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I0314 01:03:53.890370       1 main.go:116] setting mtu 1500 for CNI 
	I0314 01:03:53.890380       1 main.go:146] kindnetd IP family: "ipv4"
	I0314 01:03:53.890392       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0314 01:04:24.121474       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I0314 01:04:24.135706       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0314 01:04:24.135742       1 main.go:227] handling current node
	I0314 01:04:34.154401       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0314 01:04:34.154515       1 main.go:227] handling current node
	I0314 01:04:44.175454       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0314 01:04:44.175483       1 main.go:227] handling current node
	I0314 01:04:54.188178       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0314 01:04:54.188207       1 main.go:227] handling current node
	I0314 01:05:04.209211       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0314 01:05:04.209243       1 main.go:227] handling current node
	I0314 01:05:14.229594       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0314 01:05:14.229635       1 main.go:227] handling current node
	I0314 01:05:24.233610       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0314 01:05:24.233639       1 main.go:227] handling current node
	I0314 01:05:34.266625       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0314 01:05:34.266659       1 main.go:227] handling current node
	I0314 01:05:44.279149       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0314 01:05:44.279473       1 main.go:227] handling current node
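
The only error in this kindnet block, the i/o timeout against https://10.96.0.1:443 at 01:04:24, is kindnet racing the apiserver right after boot; it recovers on the next attempt and settles into its ten-second single-node sync loop. To see what actually backs the 10.96.0.1 service VIP, one can inspect the kubernetes Endpoints object:

	kubectl --context old-k8s-version-023742 get endpoints kubernetes -o wide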
	
	
	==> kindnet [661201637b62150c5e65fb761af09186766a8c5ee343b157bd78587674493702] <==
	I0314 01:10:04.022272       1 main.go:227] handling current node
	I0314 01:10:14.043888       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0314 01:10:14.043916       1 main.go:227] handling current node
	I0314 01:10:24.056219       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0314 01:10:24.056248       1 main.go:227] handling current node
	I0314 01:10:34.060169       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0314 01:10:34.060207       1 main.go:227] handling current node
	I0314 01:10:44.081036       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0314 01:10:44.081065       1 main.go:227] handling current node
	I0314 01:10:54.100492       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0314 01:10:54.100526       1 main.go:227] handling current node
	I0314 01:11:04.124493       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0314 01:11:04.124525       1 main.go:227] handling current node
	I0314 01:11:14.135557       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0314 01:11:14.135839       1 main.go:227] handling current node
	I0314 01:11:24.153588       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0314 01:11:24.153617       1 main.go:227] handling current node
	I0314 01:11:34.164460       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0314 01:11:34.164491       1 main.go:227] handling current node
	I0314 01:11:44.176129       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0314 01:11:44.176169       1 main.go:227] handling current node
	I0314 01:11:54.188397       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0314 01:11:54.188431       1 main.go:227] handling current node
	I0314 01:12:04.205964       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0314 01:12:04.205993       1 main.go:227] handling current node
	
	
	==> kube-apiserver [cf476c9051e98ad0797f500808cadc3fa875ce3d5892a9524d352ac49530d1c7] <==
	I0314 01:03:35.397396       1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
	I0314 01:03:35.397419       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0314 01:03:35.886154       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0314 01:03:35.940898       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0314 01:03:36.083445       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I0314 01:03:36.088814       1 controller.go:606] quota admission added evaluator for: endpoints
	I0314 01:03:36.095951       1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0314 01:03:37.017793       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	I0314 01:03:37.567500       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I0314 01:03:37.685103       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I0314 01:03:46.090127       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	I0314 01:03:53.003061       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
	I0314 01:03:53.012640       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	I0314 01:03:59.495561       1 client.go:360] parsed scheme: "passthrough"
	I0314 01:03:59.495608       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0314 01:03:59.495617       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0314 01:04:42.580998       1 client.go:360] parsed scheme: "passthrough"
	I0314 01:04:42.581046       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0314 01:04:42.581054       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0314 01:05:12.948919       1 client.go:360] parsed scheme: "passthrough"
	I0314 01:05:12.948966       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0314 01:05:12.949001       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0314 01:05:47.367401       1 client.go:360] parsed scheme: "passthrough"
	I0314 01:05:47.367442       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0314 01:05:47.367450       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	
	==> kube-apiserver [fdb44b3f1b8bfa78a5d1d74ef6b0613a584997cf3688d648110b83847d6bfbb7] <==
	I0314 01:08:49.084190       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0314 01:08:49.084199       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0314 01:09:24.271464       1 client.go:360] parsed scheme: "passthrough"
	I0314 01:09:24.271512       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0314 01:09:24.271521       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0314 01:09:31.929674       1 handler_proxy.go:102] no RequestInfo found in the context
	E0314 01:09:31.929789       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0314 01:09:31.929830       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0314 01:10:03.588184       1 client.go:360] parsed scheme: "passthrough"
	I0314 01:10:03.588358       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0314 01:10:03.588396       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0314 01:10:45.860086       1 client.go:360] parsed scheme: "passthrough"
	I0314 01:10:45.860345       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0314 01:10:45.860486       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0314 01:11:29.043938       1 client.go:360] parsed scheme: "passthrough"
	I0314 01:11:29.043982       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0314 01:11:29.043992       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0314 01:11:30.009632       1 handler_proxy.go:102] no RequestInfo found in the context
	E0314 01:11:30.009725       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0314 01:11:30.009737       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0314 01:12:01.098772       1 client.go:360] parsed scheme: "passthrough"
	I0314 01:12:01.098934       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0314 01:12:01.098949       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
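
Two patterns repeat in this apiserver log: the "parsed scheme: passthrough" triplets, which are just the embedded etcd client re-resolving its connection (normal noise on v1.20), and the OpenAPI aggregation failures for v1beta1.metrics.k8s.io, the apiserver-side symptom of metrics-server never coming up (see the kubelet log below). The 503 can be reproduced through the aggregation layer directly:

	kubectl --context old-k8s-version-023742 get --raw /apis/metrics.k8s.io/v1beta1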
	
	
	==> kube-controller-manager [00c36bf0f8362e90a4a6eb52e67b0fde8d3b9e110159c37ce5cff0a34795231b] <==
	I0314 01:03:53.005791       1 shared_informer.go:247] Caches are synced for PVC protection 
	E0314 01:03:53.008203       1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
	I0314 01:03:53.008953       1 shared_informer.go:247] Caches are synced for endpoint_slice 
	E0314 01:03:53.028753       1 clusterroleaggregation_controller.go:181] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
	I0314 01:03:53.034126       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-74ff55c5b to 2"
	I0314 01:03:53.059613       1 shared_informer.go:247] Caches are synced for attach detach 
	I0314 01:03:53.069776       1 shared_informer.go:247] Caches are synced for PV protection 
	I0314 01:03:53.081622       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-vm5pd"
	I0314 01:03:53.090589       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-jfzdr"
	I0314 01:03:53.091992       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-qzm6v"
	I0314 01:03:53.110914       1 shared_informer.go:247] Caches are synced for bootstrap_signer 
	I0314 01:03:53.117643       1 shared_informer.go:247] Caches are synced for expand 
	I0314 01:03:53.146205       1 shared_informer.go:247] Caches are synced for crt configmap 
	I0314 01:03:53.161353       1 shared_informer.go:247] Caches are synced for resource quota 
	I0314 01:03:53.172312       1 shared_informer.go:247] Caches are synced for resource quota 
	I0314 01:03:53.208986       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-m88gl"
	I0314 01:03:53.223092       1 shared_informer.go:247] Caches are synced for persistent volume 
	I0314 01:03:53.352887       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I0314 01:03:53.613729       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0314 01:03:53.613791       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0314 01:03:53.653059       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0314 01:03:54.467334       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
	I0314 01:03:54.493739       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-qzm6v"
	I0314 01:03:57.925351       1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I0314 01:05:48.361021       1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-9975d5f86 to 1"
	
	
	==> kube-controller-manager [c79d313c53d51ee00f93e8076ad26cc072bc1abea5f1a5b6a64db439ffbbd272] <==
	W0314 01:07:53.392223       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0314 01:08:19.421316       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0314 01:08:25.042672       1 request.go:655] Throttling request took 1.04839733s, request: GET:https://192.168.76.2:8443/apis/apiextensions.k8s.io/v1beta1?timeout=32s
	W0314 01:08:25.894129       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0314 01:08:49.923150       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0314 01:08:57.544490       1 request.go:655] Throttling request took 1.048266937s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W0314 01:08:58.396010       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0314 01:09:20.488040       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0314 01:09:30.047004       1 request.go:655] Throttling request took 1.045607386s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W0314 01:09:30.897723       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0314 01:09:50.989943       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0314 01:10:02.548113       1 request.go:655] Throttling request took 1.048366715s, request: GET:https://192.168.76.2:8443/apis/certificates.k8s.io/v1beta1?timeout=32s
	W0314 01:10:03.399641       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0314 01:10:21.508817       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0314 01:10:35.050184       1 request.go:655] Throttling request took 1.043303273s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W0314 01:10:35.901494       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0314 01:10:52.010179       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0314 01:11:07.551868       1 request.go:655] Throttling request took 1.048328885s, request: GET:https://192.168.76.2:8443/apis/coordination.k8s.io/v1?timeout=32s
	W0314 01:11:08.403310       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0314 01:11:22.512098       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0314 01:11:40.053775       1 request.go:655] Throttling request took 1.047475149s, request: GET:https://192.168.76.2:8443/apis/authorization.k8s.io/v1beta1?timeout=32s
	W0314 01:11:40.905256       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0314 01:11:53.014344       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0314 01:12:12.555720       1 request.go:655] Throttling request took 1.048112168s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W0314 01:12:13.409987       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
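
The alternating throttling and "failed to discover some groups" lines all share one root cause: the v1beta1.metrics.k8s.io APIService has no healthy backend, so every discovery sweep by the garbage collector and the quota controller trips over it and gets re-throttled. One way to confirm:

	kubectl --context old-k8s-version-023742 get apiservice v1beta1.metrics.k8s.io

An Available=False condition (reason MissingEndpoints or FailedDiscoveryCheck) would line up with these errors.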
	
	
	==> kube-proxy [36947fb39456b70b586f2321c1589d336fbf8121fcabf5f669409c9680ddc202] <==
	I0314 01:03:54.689508       1 node.go:172] Successfully retrieved node IP: 192.168.76.2
	I0314 01:03:54.689804       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
	W0314 01:03:54.712712       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0314 01:03:54.713007       1 server_others.go:185] Using iptables Proxier.
	I0314 01:03:54.713689       1 server.go:650] Version: v1.20.0
	I0314 01:03:54.714430       1 config.go:224] Starting endpoint slice config controller
	I0314 01:03:54.714447       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0314 01:03:54.714577       1 config.go:315] Starting service config controller
	I0314 01:03:54.714590       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0314 01:03:54.814534       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0314 01:03:54.814613       1 shared_informer.go:247] Caches are synced for service config 
	
	
	==> kube-proxy [a649cec0099ad8c4f2f59cb4643ea18e343945d9edac27d0c1542055b2ce681c] <==
	I0314 01:06:33.065342       1 node.go:172] Successfully retrieved node IP: 192.168.76.2
	I0314 01:06:33.065420       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
	W0314 01:06:33.123753       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0314 01:06:33.123845       1 server_others.go:185] Using iptables Proxier.
	I0314 01:06:33.124084       1 server.go:650] Version: v1.20.0
	I0314 01:06:33.124586       1 config.go:315] Starting service config controller
	I0314 01:06:33.124624       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0314 01:06:33.125582       1 config.go:224] Starting endpoint slice config controller
	I0314 01:06:33.125592       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0314 01:06:33.227300       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0314 01:06:33.227363       1 shared_informer.go:247] Caches are synced for service config 
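
Both kube-proxy incarnations warn Unknown proxy mode "", assuming iptables proxy: the mode field was left empty in the kube-proxy configuration, so the iptables default applied. The effective config can be read back from its ConfigMap:

	kubectl --context old-k8s-version-023742 -n kube-system get configmap kube-proxy -o yaml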
	
	
	==> kube-scheduler [89cecf527c72212c9774ad52dd4fdf563d5e162b2b3c0590daf1a40383e5f87c] <==
	W0314 01:03:34.529473       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0314 01:03:34.530656       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0314 01:03:34.530690       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0314 01:03:34.530696       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0314 01:03:34.607745       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0314 01:03:34.608625       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0314 01:03:34.610333       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0314 01:03:34.610547       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0314 01:03:34.658931       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0314 01:03:34.659340       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0314 01:03:34.659591       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0314 01:03:34.659814       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0314 01:03:34.660057       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0314 01:03:34.660308       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0314 01:03:34.660565       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0314 01:03:34.660858       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0314 01:03:34.661079       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0314 01:03:34.661330       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0314 01:03:34.661570       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0314 01:03:34.671855       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0314 01:03:35.510334       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0314 01:03:35.583820       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0314 01:03:35.623429       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0314 01:03:35.720747       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0314 01:03:36.110698       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kube-scheduler [abe11e046f972c9c239dc5503ce7b4e6abdb261d19262a88ebf1157e26888704] <==
	I0314 01:06:24.545939       1 serving.go:331] Generated self-signed cert in-memory
	W0314 01:06:28.918936       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0314 01:06:28.919063       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0314 01:06:28.919079       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0314 01:06:28.919086       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0314 01:06:29.238675       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0314 01:06:29.240716       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0314 01:06:29.240734       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0314 01:06:29.240750       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0314 01:06:29.340818       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kubelet <==
	Mar 14 01:10:25 old-k8s-version-023742 kubelet[662]: E0314 01:10:25.766011     662 pod_workers.go:191] Error syncing pod 1e66610b-54fb-4a50-b7c1-47326c1c1c70 ("dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"
	Mar 14 01:10:29 old-k8s-version-023742 kubelet[662]: E0314 01:10:29.766425     662 pod_workers.go:191] Error syncing pod 320a7d9f-62a0-45d2-b07d-657dc3bd1b28 ("metrics-server-9975d5f86-prnsg_kube-system(320a7d9f-62a0-45d2-b07d-657dc3bd1b28)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Mar 14 01:10:40 old-k8s-version-023742 kubelet[662]: I0314 01:10:40.765657     662 scope.go:95] [topologymanager] RemoveContainer - Container ID: e4fbd91b78323ad88694e984c4cc9e04e2445d86c81406ea301892cae622ebb9
	Mar 14 01:10:40 old-k8s-version-023742 kubelet[662]: E0314 01:10:40.766016     662 pod_workers.go:191] Error syncing pod 1e66610b-54fb-4a50-b7c1-47326c1c1c70 ("dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"
	Mar 14 01:10:43 old-k8s-version-023742 kubelet[662]: E0314 01:10:43.767405     662 pod_workers.go:191] Error syncing pod 320a7d9f-62a0-45d2-b07d-657dc3bd1b28 ("metrics-server-9975d5f86-prnsg_kube-system(320a7d9f-62a0-45d2-b07d-657dc3bd1b28)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Mar 14 01:10:54 old-k8s-version-023742 kubelet[662]: I0314 01:10:54.765632     662 scope.go:95] [topologymanager] RemoveContainer - Container ID: e4fbd91b78323ad88694e984c4cc9e04e2445d86c81406ea301892cae622ebb9
	Mar 14 01:10:54 old-k8s-version-023742 kubelet[662]: E0314 01:10:54.765962     662 pod_workers.go:191] Error syncing pod 1e66610b-54fb-4a50-b7c1-47326c1c1c70 ("dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"
	Mar 14 01:10:55 old-k8s-version-023742 kubelet[662]: E0314 01:10:55.766370     662 pod_workers.go:191] Error syncing pod 320a7d9f-62a0-45d2-b07d-657dc3bd1b28 ("metrics-server-9975d5f86-prnsg_kube-system(320a7d9f-62a0-45d2-b07d-657dc3bd1b28)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Mar 14 01:11:06 old-k8s-version-023742 kubelet[662]: E0314 01:11:06.766830     662 pod_workers.go:191] Error syncing pod 320a7d9f-62a0-45d2-b07d-657dc3bd1b28 ("metrics-server-9975d5f86-prnsg_kube-system(320a7d9f-62a0-45d2-b07d-657dc3bd1b28)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Mar 14 01:11:08 old-k8s-version-023742 kubelet[662]: I0314 01:11:08.765654     662 scope.go:95] [topologymanager] RemoveContainer - Container ID: e4fbd91b78323ad88694e984c4cc9e04e2445d86c81406ea301892cae622ebb9
	Mar 14 01:11:08 old-k8s-version-023742 kubelet[662]: E0314 01:11:08.765992     662 pod_workers.go:191] Error syncing pod 1e66610b-54fb-4a50-b7c1-47326c1c1c70 ("dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"
	Mar 14 01:11:19 old-k8s-version-023742 kubelet[662]: E0314 01:11:19.766986     662 pod_workers.go:191] Error syncing pod 320a7d9f-62a0-45d2-b07d-657dc3bd1b28 ("metrics-server-9975d5f86-prnsg_kube-system(320a7d9f-62a0-45d2-b07d-657dc3bd1b28)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Mar 14 01:11:19 old-k8s-version-023742 kubelet[662]: I0314 01:11:19.769874     662 scope.go:95] [topologymanager] RemoveContainer - Container ID: e4fbd91b78323ad88694e984c4cc9e04e2445d86c81406ea301892cae622ebb9
	Mar 14 01:11:19 old-k8s-version-023742 kubelet[662]: E0314 01:11:19.771307     662 pod_workers.go:191] Error syncing pod 1e66610b-54fb-4a50-b7c1-47326c1c1c70 ("dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"
	Mar 14 01:11:31 old-k8s-version-023742 kubelet[662]: E0314 01:11:31.766411     662 pod_workers.go:191] Error syncing pod 320a7d9f-62a0-45d2-b07d-657dc3bd1b28 ("metrics-server-9975d5f86-prnsg_kube-system(320a7d9f-62a0-45d2-b07d-657dc3bd1b28)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Mar 14 01:11:31 old-k8s-version-023742 kubelet[662]: I0314 01:11:31.767728     662 scope.go:95] [topologymanager] RemoveContainer - Container ID: e4fbd91b78323ad88694e984c4cc9e04e2445d86c81406ea301892cae622ebb9
	Mar 14 01:11:31 old-k8s-version-023742 kubelet[662]: E0314 01:11:31.768210     662 pod_workers.go:191] Error syncing pod 1e66610b-54fb-4a50-b7c1-47326c1c1c70 ("dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"
	Mar 14 01:11:42 old-k8s-version-023742 kubelet[662]: I0314 01:11:42.765698     662 scope.go:95] [topologymanager] RemoveContainer - Container ID: e4fbd91b78323ad88694e984c4cc9e04e2445d86c81406ea301892cae622ebb9
	Mar 14 01:11:42 old-k8s-version-023742 kubelet[662]: E0314 01:11:42.766039     662 pod_workers.go:191] Error syncing pod 1e66610b-54fb-4a50-b7c1-47326c1c1c70 ("dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"
	Mar 14 01:11:45 old-k8s-version-023742 kubelet[662]: E0314 01:11:45.766829     662 pod_workers.go:191] Error syncing pod 320a7d9f-62a0-45d2-b07d-657dc3bd1b28 ("metrics-server-9975d5f86-prnsg_kube-system(320a7d9f-62a0-45d2-b07d-657dc3bd1b28)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Mar 14 01:11:57 old-k8s-version-023742 kubelet[662]: I0314 01:11:57.766884     662 scope.go:95] [topologymanager] RemoveContainer - Container ID: e4fbd91b78323ad88694e984c4cc9e04e2445d86c81406ea301892cae622ebb9
	Mar 14 01:11:57 old-k8s-version-023742 kubelet[662]: E0314 01:11:57.767790     662 pod_workers.go:191] Error syncing pod 1e66610b-54fb-4a50-b7c1-47326c1c1c70 ("dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"
	Mar 14 01:11:59 old-k8s-version-023742 kubelet[662]: E0314 01:11:59.770052     662 pod_workers.go:191] Error syncing pod 320a7d9f-62a0-45d2-b07d-657dc3bd1b28 ("metrics-server-9975d5f86-prnsg_kube-system(320a7d9f-62a0-45d2-b07d-657dc3bd1b28)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Mar 14 01:12:08 old-k8s-version-023742 kubelet[662]: I0314 01:12:08.765688     662 scope.go:95] [topologymanager] RemoveContainer - Container ID: e4fbd91b78323ad88694e984c4cc9e04e2445d86c81406ea301892cae622ebb9
	Mar 14 01:12:08 old-k8s-version-023742 kubelet[662]: E0314 01:12:08.766045     662 pod_workers.go:191] Error syncing pod 1e66610b-54fb-4a50-b7c1-47326c1c1c70 ("dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fm8cx_kubernetes-dashboard(1e66610b-54fb-4a50-b7c1-47326c1c1c70)"
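
Every kubelet error in this window is one of two steady-state loops: dashboard-metrics-scraper crash-looping with a 2m40s back-off, and metrics-server stuck in ImagePullBackOff on fake.domain/registry.k8s.io/echoserver:1.4, an image reference under a registry that is not expected to resolve. For the pod-level view of the pull failure (label selector assumed from the stock metrics-server manifest):

	kubectl --context old-k8s-version-023742 -n kube-system describe pod -l k8s-app=metrics-server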
	
	
	==> kubernetes-dashboard [c83ee544525cae8ae275b356afc7ab3e13354a5bfdc13e46d3bb967974cc4fd4] <==
	2024/03/14 01:06:53 Using namespace: kubernetes-dashboard
	2024/03/14 01:06:53 Using in-cluster config to connect to apiserver
	2024/03/14 01:06:53 Using secret token for csrf signing
	2024/03/14 01:06:53 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/03/14 01:06:53 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/03/14 01:06:53 Successful initial request to the apiserver, version: v1.20.0
	2024/03/14 01:06:53 Generating JWE encryption key
	2024/03/14 01:06:53 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/03/14 01:06:53 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/03/14 01:06:53 Initializing JWE encryption key from synchronized object
	2024/03/14 01:06:53 Creating in-cluster Sidecar client
	2024/03/14 01:06:53 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/14 01:06:53 Serving insecurely on HTTP port: 9090
	2024/03/14 01:07:23 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/14 01:07:53 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/14 01:08:23 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/14 01:08:53 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/14 01:09:23 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/14 01:09:53 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/14 01:10:23 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/14 01:10:53 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/14 01:11:23 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/14 01:11:53 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/14 01:06:53 Starting overwatch
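
The dashboard itself is healthy throughout (serving on :9090); only its metrics sidecar client fails every 30 seconds, because dashboard-metrics-scraper is the crash-looping pod seen in the kubelet log. The out-of-place "Starting overwatch" line at the end is log-write reordering rather than a restart; its timestamp matches startup. To check the scraper service the health probe targets (label assumed from the stock dashboard manifest):

	kubectl --context old-k8s-version-023742 -n kubernetes-dashboard get svc,pods -l k8s-app=dashboard-metrics-scraper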
	
	
	==> storage-provisioner [0378f91f5f30bc555d7408bc86dd626b1d5ffeb38145d3ee69160c360c8f3416] <==
	I0314 01:07:16.856517       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0314 01:07:16.883501       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0314 01:07:16.883718       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0314 01:07:34.402249       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0314 01:07:34.403528       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-023742_56ab5019-fa74-4ba9-b03f-aaf50b0afd68!
	I0314 01:07:34.419734       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e8feff7c-5e36-4fc4-849d-81ec2bc2ebd0", APIVersion:"v1", ResourceVersion:"870", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-023742_56ab5019-fa74-4ba9-b03f-aaf50b0afd68 became leader
	I0314 01:07:34.504469       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-023742_56ab5019-fa74-4ba9-b03f-aaf50b0afd68!
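
The roughly 18-second gap between attempting and acquiring the leader lease is this provisioner waiting out the lease held by the earlier instance (below), which died before renewing it. minikube's hostpath provisioner records leadership as an annotation on an Endpoints object, inspectable with:

	kubectl --context old-k8s-version-023742 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml

and reading the control-plane.alpha.kubernetes.io/leader annotation (the name used by client-go's endpoints resource lock).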
	
	
	==> storage-provisioner [4e7dc51c48baacd42a05abe380200f2d6c242433155f704a890e45913c11586c] <==
	I0314 01:06:31.241345       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0314 01:07:01.243879       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-023742 -n old-k8s-version-023742
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-023742 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-9975d5f86-prnsg
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-023742 describe pod metrics-server-9975d5f86-prnsg
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-023742 describe pod metrics-server-9975d5f86-prnsg: exit status 1 (141.589698ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-9975d5f86-prnsg" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-023742 describe pod metrics-server-9975d5f86-prnsg: exit status 1
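
The NotFound here is a race inside the post-mortem itself: metrics-server-9975d5f86-prnsg existed when the field-selector query at helpers_test.go:261 ran, but was gone (deleted or replaced by a new ReplicaSet pod) by the time the describe call ran moments later. Listing and describing in one pass narrows that window; a sketch:

	kubectl --context old-k8s-version-023742 get po -A --field-selector=status.phase!=Running \
	  -o jsonpath='{range .items[*]}{.metadata.namespace}{" "}{.metadata.name}{"\n"}{end}' \
	  | while read ns name; do kubectl --context old-k8s-version-023742 -n "$ns" describe pod "$name"; done
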
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (373.45s)

                                                
                                    

Test pass (297/335)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 10.15
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.09
9 TestDownloadOnly/v1.20.0/DeleteAll 0.22
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.28.4/json-events 11.54
13 TestDownloadOnly/v1.28.4/preload-exists 0
17 TestDownloadOnly/v1.28.4/LogsDuration 0.09
18 TestDownloadOnly/v1.28.4/DeleteAll 0.2
19 TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds 0.14
21 TestDownloadOnly/v1.29.0-rc.2/json-events 9.73
22 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
26 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.09
27 TestDownloadOnly/v1.29.0-rc.2/DeleteAll 0.22
28 TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds 0.15
30 TestBinaryMirror 0.56
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.08
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.09
36 TestAddons/Setup 118.98
38 TestAddons/parallel/Registry 15.78
40 TestAddons/parallel/InspektorGadget 11.84
41 TestAddons/parallel/MetricsServer 7
44 TestAddons/parallel/CSI 77.53
45 TestAddons/parallel/Headlamp 11.5
46 TestAddons/parallel/CloudSpanner 6.74
47 TestAddons/parallel/LocalPath 9.88
48 TestAddons/parallel/NvidiaDevicePlugin 5.57
49 TestAddons/parallel/Yakd 6.01
52 TestAddons/serial/GCPAuth/Namespaces 0.19
53 TestAddons/StoppedEnableDisable 12.32
54 TestCertOptions 38.25
55 TestCertExpiration 231.14
57 TestForceSystemdFlag 46.42
58 TestForceSystemdEnv 39.85
59 TestDockerEnvContainerd 50.44
64 TestErrorSpam/setup 29.12
65 TestErrorSpam/start 0.77
66 TestErrorSpam/status 1.04
67 TestErrorSpam/pause 1.7
68 TestErrorSpam/unpause 1.79
69 TestErrorSpam/stop 1.48
72 TestFunctional/serial/CopySyncFile 0
73 TestFunctional/serial/StartWithProxy 61.37
74 TestFunctional/serial/AuditLog 0
75 TestFunctional/serial/SoftStart 6.19
76 TestFunctional/serial/KubeContext 0.07
77 TestFunctional/serial/KubectlGetPods 0.1
80 TestFunctional/serial/CacheCmd/cache/add_remote 4.04
81 TestFunctional/serial/CacheCmd/cache/add_local 1.5
82 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
83 TestFunctional/serial/CacheCmd/cache/list 0.06
84 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.34
85 TestFunctional/serial/CacheCmd/cache/cache_reload 2.1
86 TestFunctional/serial/CacheCmd/cache/delete 0.13
87 TestFunctional/serial/MinikubeKubectlCmd 0.15
88 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.16
89 TestFunctional/serial/ExtraConfig 43.98
90 TestFunctional/serial/ComponentHealth 0.1
91 TestFunctional/serial/LogsCmd 1.79
92 TestFunctional/serial/LogsFileCmd 1.77
93 TestFunctional/serial/InvalidService 4.86
95 TestFunctional/parallel/ConfigCmd 0.52
96 TestFunctional/parallel/DashboardCmd 10.41
97 TestFunctional/parallel/DryRun 0.61
98 TestFunctional/parallel/InternationalLanguage 0.33
99 TestFunctional/parallel/StatusCmd 1.38
103 TestFunctional/parallel/ServiceCmdConnect 8.76
104 TestFunctional/parallel/AddonsCmd 0.23
105 TestFunctional/parallel/PersistentVolumeClaim 26.26
107 TestFunctional/parallel/SSHCmd 0.68
108 TestFunctional/parallel/CpCmd 2.22
110 TestFunctional/parallel/FileSync 0.35
111 TestFunctional/parallel/CertSync 2.41
115 TestFunctional/parallel/NodeLabels 0.15
117 TestFunctional/parallel/NonActiveRuntimeDisabled 0.93
119 TestFunctional/parallel/License 0.44
121 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.7
122 TestFunctional/parallel/Version/short 0.09
123 TestFunctional/parallel/Version/components 1.34
124 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
126 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.44
127 TestFunctional/parallel/ImageCommands/ImageListShort 0.24
128 TestFunctional/parallel/ImageCommands/ImageListTable 0.32
129 TestFunctional/parallel/ImageCommands/ImageListJson 0.28
130 TestFunctional/parallel/ImageCommands/ImageListYaml 0.25
131 TestFunctional/parallel/ImageCommands/ImageBuild 2.86
132 TestFunctional/parallel/ImageCommands/Setup 1.93
136 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.1
137 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
141 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
142 TestFunctional/parallel/UpdateContextCmd/no_changes 0.24
143 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.29
144 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.24
146 TestFunctional/parallel/MountCmd/any-port 7.67
147 TestFunctional/parallel/ImageCommands/ImageRemove 0.64
149 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.64
150 TestFunctional/parallel/MountCmd/specific-port 2.48
151 TestFunctional/parallel/MountCmd/VerifyCleanup 1.97
152 TestFunctional/parallel/ServiceCmd/DeployApp 7.25
153 TestFunctional/parallel/ProfileCmd/profile_not_create 0.42
154 TestFunctional/parallel/ProfileCmd/profile_list 0.49
155 TestFunctional/parallel/ServiceCmd/List 0.62
156 TestFunctional/parallel/ProfileCmd/profile_json_output 0.5
157 TestFunctional/parallel/ServiceCmd/JSONOutput 0.66
158 TestFunctional/parallel/ServiceCmd/HTTPS 0.57
159 TestFunctional/parallel/ServiceCmd/Format 0.5
160 TestFunctional/parallel/ServiceCmd/URL 0.5
161 TestFunctional/delete_addon-resizer_images 0.08
162 TestFunctional/delete_my-image_image 0.02
163 TestFunctional/delete_minikube_cached_images 0.01
167 TestMutliControlPlane/serial/StartCluster 139.07
168 TestMutliControlPlane/serial/DeployApp 16.85
169 TestMutliControlPlane/serial/PingHostFromPods 1.75
170 TestMutliControlPlane/serial/AddWorkerNode 26.47
171 TestMutliControlPlane/serial/NodeLabels 0.12
172 TestMutliControlPlane/serial/HAppyAfterClusterStart 0.8
173 TestMutliControlPlane/serial/CopyFile 20.4
174 TestMutliControlPlane/serial/StopSecondaryNode 12.84
175 TestMutliControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.56
176 TestMutliControlPlane/serial/RestartSecondaryNode 18.61
177 TestMutliControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.1
178 TestMutliControlPlane/serial/RestartClusterKeepsNodes 129.37
179 TestMutliControlPlane/serial/DeleteSecondaryNode 11.28
180 TestMutliControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.56
181 TestMutliControlPlane/serial/StopCluster 36.03
182 TestMutliControlPlane/serial/RestartCluster 67.79
183 TestMutliControlPlane/serial/DegradedAfterClusterRestart 0.54
184 TestMutliControlPlane/serial/AddSecondaryNode 41.48
185 TestMutliControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.79
189 TestJSONOutput/start/Command 64.1
190 TestJSONOutput/start/Audit 0
192 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/pause/Command 0.74
196 TestJSONOutput/pause/Audit 0
198 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/unpause/Command 0.68
202 TestJSONOutput/unpause/Audit 0
204 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
207 TestJSONOutput/stop/Command 5.78
208 TestJSONOutput/stop/Audit 0
210 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
211 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
212 TestErrorJSONOutput 0.23
214 TestKicCustomNetwork/create_custom_network 40.09
215 TestKicCustomNetwork/use_default_bridge_network 34.06
216 TestKicExistingNetwork 36.44
217 TestKicCustomSubnet 33.41
218 TestKicStaticIP 34.23
219 TestMainNoArgs 0.06
220 TestMinikubeProfile 68.01
223 TestMountStart/serial/StartWithMountFirst 7.08
224 TestMountStart/serial/VerifyMountFirst 0.27
225 TestMountStart/serial/StartWithMountSecond 9.69
226 TestMountStart/serial/VerifyMountSecond 0.27
227 TestMountStart/serial/DeleteFirst 1.63
228 TestMountStart/serial/VerifyMountPostDelete 0.28
229 TestMountStart/serial/Stop 1.2
230 TestMountStart/serial/RestartStopped 7.68
231 TestMountStart/serial/VerifyMountPostStop 0.26
234 TestMultiNode/serial/FreshStart2Nodes 70.31
235 TestMultiNode/serial/DeployApp2Nodes 5.36
236 TestMultiNode/serial/PingHostFrom2Pods 1.09
237 TestMultiNode/serial/AddNode 18.6
238 TestMultiNode/serial/MultiNodeLabels 0.11
239 TestMultiNode/serial/ProfileList 0.35
240 TestMultiNode/serial/CopyFile 10.75
241 TestMultiNode/serial/StopNode 2.33
242 TestMultiNode/serial/StartAfterStop 9.63
243 TestMultiNode/serial/RestartKeepsNodes 80.34
244 TestMultiNode/serial/DeleteNode 5.57
245 TestMultiNode/serial/StopMultiNode 24.04
246 TestMultiNode/serial/RestartMultiNode 45.71
247 TestMultiNode/serial/ValidateNameConflict 33.66
252 TestPreload 125.12
254 TestScheduledStopUnix 107.75
257 TestInsufficientStorage 10.28
258 TestRunningBinaryUpgrade 82.97
260 TestKubernetesUpgrade 381.43
261 TestMissingContainerUpgrade 174.32
263 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
264 TestNoKubernetes/serial/StartWithK8s 39.65
265 TestNoKubernetes/serial/StartWithStopK8s 19.9
266 TestNoKubernetes/serial/Start 9.56
267 TestNoKubernetes/serial/VerifyK8sNotRunning 0.37
268 TestNoKubernetes/serial/ProfileList 1.55
269 TestNoKubernetes/serial/Stop 1.24
270 TestNoKubernetes/serial/StartNoArgs 6.63
271 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.28
272 TestStoppedBinaryUpgrade/Setup 1.16
273 TestStoppedBinaryUpgrade/Upgrade 104.2
274 TestStoppedBinaryUpgrade/MinikubeLogs 1.06
283 TestPause/serial/Start 60.57
284 TestPause/serial/SecondStartNoReconfiguration 7.29
285 TestPause/serial/Pause 1.05
286 TestPause/serial/VerifyStatus 0.43
287 TestPause/serial/Unpause 0.82
288 TestPause/serial/PauseAgain 1.1
289 TestPause/serial/DeletePaused 2.93
290 TestPause/serial/VerifyDeletedResources 0.53
298 TestNetworkPlugins/group/false 5.46
303 TestStartStop/group/old-k8s-version/serial/FirstStart 163.42
305 TestStartStop/group/no-preload/serial/FirstStart 82.35
306 TestStartStop/group/old-k8s-version/serial/DeployApp 10.01
307 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.38
308 TestStartStop/group/old-k8s-version/serial/Stop 12.73
309 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.23
311 TestStartStop/group/no-preload/serial/DeployApp 9.42
312 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.25
313 TestStartStop/group/no-preload/serial/Stop 12.09
314 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
315 TestStartStop/group/no-preload/serial/SecondStart 281.14
316 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
317 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.14
318 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.32
319 TestStartStop/group/no-preload/serial/Pause 3.72
320 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
322 TestStartStop/group/embed-certs/serial/FirstStart 66.76
323 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.13
324 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.28
325 TestStartStop/group/old-k8s-version/serial/Pause 3.81
327 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 69.65
328 TestStartStop/group/embed-certs/serial/DeployApp 7.42
329 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.19
330 TestStartStop/group/embed-certs/serial/Stop 12.09
331 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.37
332 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.21
333 TestStartStop/group/embed-certs/serial/SecondStart 290.02
334 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.55
335 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.37
336 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.21
337 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 300.51
338 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
339 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.11
340 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.26
341 TestStartStop/group/embed-certs/serial/Pause 3.18
343 TestStartStop/group/newest-cni/serial/FirstStart 45.79
344 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
345 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 6.17
346 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.32
347 TestStartStop/group/default-k8s-diff-port/serial/Pause 4.2
348 TestNetworkPlugins/group/auto/Start 70.7
349 TestStartStop/group/newest-cni/serial/DeployApp 0
350 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.64
351 TestStartStop/group/newest-cni/serial/Stop 1.31
352 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.3
353 TestStartStop/group/newest-cni/serial/SecondStart 21.8
354 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
355 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
356 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.38
357 TestStartStop/group/newest-cni/serial/Pause 3.73
358 TestNetworkPlugins/group/kindnet/Start 66.75
359 TestNetworkPlugins/group/auto/KubeletFlags 0.4
360 TestNetworkPlugins/group/auto/NetCatPod 10.3
361 TestNetworkPlugins/group/auto/DNS 0.25
362 TestNetworkPlugins/group/auto/Localhost 0.16
363 TestNetworkPlugins/group/auto/HairPin 0.16
364 TestNetworkPlugins/group/calico/Start 77.75
365 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
366 TestNetworkPlugins/group/kindnet/KubeletFlags 0.38
367 TestNetworkPlugins/group/kindnet/NetCatPod 10.36
368 TestNetworkPlugins/group/kindnet/DNS 0.28
369 TestNetworkPlugins/group/kindnet/Localhost 0.25
370 TestNetworkPlugins/group/kindnet/HairPin 0.25
371 TestNetworkPlugins/group/custom-flannel/Start 65.41
372 TestNetworkPlugins/group/calico/ControllerPod 6.01
373 TestNetworkPlugins/group/calico/KubeletFlags 0.48
374 TestNetworkPlugins/group/calico/NetCatPod 10.42
375 TestNetworkPlugins/group/calico/DNS 0.25
376 TestNetworkPlugins/group/calico/Localhost 0.17
377 TestNetworkPlugins/group/calico/HairPin 0.37
378 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.46
379 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.49
380 TestNetworkPlugins/group/enable-default-cni/Start 90.93
381 TestNetworkPlugins/group/custom-flannel/DNS 0.21
382 TestNetworkPlugins/group/custom-flannel/Localhost 0.16
383 TestNetworkPlugins/group/custom-flannel/HairPin 0.17
384 TestNetworkPlugins/group/flannel/Start 64.73
385 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.32
386 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.3
387 TestNetworkPlugins/group/flannel/ControllerPod 6.01
388 TestNetworkPlugins/group/enable-default-cni/DNS 0.19
389 TestNetworkPlugins/group/enable-default-cni/Localhost 0.16
390 TestNetworkPlugins/group/enable-default-cni/HairPin 0.16
391 TestNetworkPlugins/group/flannel/KubeletFlags 0.34
392 TestNetworkPlugins/group/flannel/NetCatPod 8.26
393 TestNetworkPlugins/group/flannel/DNS 0.24
394 TestNetworkPlugins/group/flannel/Localhost 0.24
395 TestNetworkPlugins/group/flannel/HairPin 0.22
396 TestNetworkPlugins/group/bridge/Start 46.06
397 TestNetworkPlugins/group/bridge/KubeletFlags 0.3
398 TestNetworkPlugins/group/bridge/NetCatPod 10.25
399 TestNetworkPlugins/group/bridge/DNS 0.17
400 TestNetworkPlugins/group/bridge/Localhost 0.15
401 TestNetworkPlugins/group/bridge/HairPin 0.15
TestDownloadOnly/v1.20.0/json-events (10.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-455584 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-455584 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (10.144590705s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (10.15s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)
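preload-exists takes ~0s because it only has to confirm that the json-events step left the preload tarball in the local cache. A rough manual equivalent, using the cache path logged later in this run (the MINIKUBE_HOME prefix is specific to this CI host):

	ls -lh /home/jenkins/minikube-integration/18375-1958430/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4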

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-455584
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-455584: exit status 85 (89.388186ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-455584 | jenkins | v1.32.0 | 14 Mar 24 00:20 UTC |          |
	|         | -p download-only-455584        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/14 00:20:05
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0314 00:20:05.017358 1963903 out.go:291] Setting OutFile to fd 1 ...
	I0314 00:20:05.017509 1963903 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 00:20:05.017518 1963903 out.go:304] Setting ErrFile to fd 2...
	I0314 00:20:05.017523 1963903 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 00:20:05.017812 1963903 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18375-1958430/.minikube/bin
	W0314 00:20:05.017961 1963903 root.go:314] Error reading config file at /home/jenkins/minikube-integration/18375-1958430/.minikube/config/config.json: open /home/jenkins/minikube-integration/18375-1958430/.minikube/config/config.json: no such file or directory
	I0314 00:20:05.018390 1963903 out.go:298] Setting JSON to true
	I0314 00:20:05.019370 1963903 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":28955,"bootTime":1710346650,"procs":169,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0314 00:20:05.019457 1963903 start.go:139] virtualization:  
	I0314 00:20:05.022384 1963903 out.go:97] [download-only-455584] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0314 00:20:05.024593 1963903 out.go:169] MINIKUBE_LOCATION=18375
	W0314 00:20:05.022575 1963903 preload.go:294] Failed to list preload files: open /home/jenkins/minikube-integration/18375-1958430/.minikube/cache/preloaded-tarball: no such file or directory
	I0314 00:20:05.022627 1963903 notify.go:220] Checking for updates...
	I0314 00:20:05.029166 1963903 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0314 00:20:05.031170 1963903 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18375-1958430/kubeconfig
	I0314 00:20:05.033535 1963903 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18375-1958430/.minikube
	I0314 00:20:05.035428 1963903 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0314 00:20:05.039517 1963903 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0314 00:20:05.039854 1963903 driver.go:392] Setting default libvirt URI to qemu:///system
	I0314 00:20:05.062265 1963903 docker.go:122] docker version: linux-25.0.4:Docker Engine - Community
	I0314 00:20:05.062378 1963903 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0314 00:20:05.134492 1963903 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:56 SystemTime:2024-03-14 00:20:05.12369625 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0314 00:20:05.134606 1963903 docker.go:295] overlay module found
	I0314 00:20:05.136516 1963903 out.go:97] Using the docker driver based on user configuration
	I0314 00:20:05.136541 1963903 start.go:297] selected driver: docker
	I0314 00:20:05.136548 1963903 start.go:901] validating driver "docker" against <nil>
	I0314 00:20:05.136696 1963903 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0314 00:20:05.191749 1963903 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:56 SystemTime:2024-03-14 00:20:05.182226647 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0314 00:20:05.191914 1963903 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0314 00:20:05.192214 1963903 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0314 00:20:05.192379 1963903 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0314 00:20:05.194528 1963903 out.go:169] Using Docker driver with root privileges
	I0314 00:20:05.196502 1963903 cni.go:84] Creating CNI manager for ""
	I0314 00:20:05.196533 1963903 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0314 00:20:05.196543 1963903 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0314 00:20:05.196627 1963903 start.go:340] cluster config:
	{Name:download-only-455584 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-455584 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 00:20:05.199351 1963903 out.go:97] Starting "download-only-455584" primary control-plane node in "download-only-455584" cluster
	I0314 00:20:05.199377 1963903 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0314 00:20:05.201926 1963903 out.go:97] Pulling base image v0.0.42-1710284843-18375 ...
	I0314 00:20:05.201953 1963903 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0314 00:20:05.202063 1963903 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f in local docker daemon
	I0314 00:20:05.216937 1963903 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f to local cache
	I0314 00:20:05.217143 1963903 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f in local cache directory
	I0314 00:20:05.217251 1963903 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f to local cache
	I0314 00:20:05.270926 1963903 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0314 00:20:05.270968 1963903 cache.go:56] Caching tarball of preloaded images
	I0314 00:20:05.271826 1963903 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0314 00:20:05.275196 1963903 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0314 00:20:05.275267 1963903 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I0314 00:20:05.383701 1963903 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:7e3d48ccb9f143791669d02e14ce1643 -> /home/jenkins/minikube-integration/18375-1958430/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0314 00:20:10.779991 1963903 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I0314 00:20:10.780085 1963903 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/18375-1958430/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I0314 00:20:10.963620 1963903 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f as a tarball
	I0314 00:20:11.890425 1963903 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
	I0314 00:20:11.890798 1963903 profile.go:142] Saving config to /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/download-only-455584/config.json ...
	I0314 00:20:11.890832 1963903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/download-only-455584/config.json: {Name:mke573b2b41ea3d2c4a8bd20708321cac0aa71db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 00:20:11.891025 1963903 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0314 00:20:11.891243 1963903 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/18375-1958430/.minikube/cache/linux/arm64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-455584 host does not exist
	  To start a cluster, run: "minikube start -p download-only-455584"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.09s)
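The Last Start log above also shows how the preload is fetched: the download URL carries an md5 checksum, which is saved and verified after the transfer (the preload.go:237/255 lines). A hedged shell sketch of that fetch-and-verify step, using the exact URL and checksum from this run rather than minikube's own implementation:

	curl -fLo preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	echo "7e3d48ccb9f143791669d02e14ce1643  preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4" | md5sum -c -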

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.22s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-455584
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)
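The test name states the contract: delete -p should exit 0 even for a profile that is already gone (DeleteAll ran just before this). A quick manual check of that property, reusing this run's profile name:

	out/minikube-linux-arm64 delete -p download-only-455584
	out/minikube-linux-arm64 delete -p download-only-455584  # profile is now absent; should still exit 0
	echo $?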

                                                
                                    
TestDownloadOnly/v1.28.4/json-events (11.54s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-540583 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-540583 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (11.541257028s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (11.54s)

                                                
                                    
TestDownloadOnly/v1.28.4/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-540583
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-540583: exit status 85 (90.354599ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-455584 | jenkins | v1.32.0 | 14 Mar 24 00:20 UTC |                     |
	|         | -p download-only-455584        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.32.0 | 14 Mar 24 00:20 UTC | 14 Mar 24 00:20 UTC |
	| delete  | -p download-only-455584        | download-only-455584 | jenkins | v1.32.0 | 14 Mar 24 00:20 UTC | 14 Mar 24 00:20 UTC |
	| start   | -o=json --download-only        | download-only-540583 | jenkins | v1.32.0 | 14 Mar 24 00:20 UTC |                     |
	|         | -p download-only-540583        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/14 00:20:15
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0314 00:20:15.612647 1964065 out.go:291] Setting OutFile to fd 1 ...
	I0314 00:20:15.612811 1964065 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 00:20:15.612822 1964065 out.go:304] Setting ErrFile to fd 2...
	I0314 00:20:15.612828 1964065 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 00:20:15.613088 1964065 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18375-1958430/.minikube/bin
	I0314 00:20:15.613483 1964065 out.go:298] Setting JSON to true
	I0314 00:20:15.614384 1964065 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":28966,"bootTime":1710346650,"procs":166,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0314 00:20:15.614462 1964065 start.go:139] virtualization:  
	I0314 00:20:15.617108 1964065 out.go:97] [download-only-540583] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0314 00:20:15.617410 1964065 notify.go:220] Checking for updates...
	I0314 00:20:15.619244 1964065 out.go:169] MINIKUBE_LOCATION=18375
	I0314 00:20:15.621942 1964065 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0314 00:20:15.624628 1964065 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18375-1958430/kubeconfig
	I0314 00:20:15.626446 1964065 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18375-1958430/.minikube
	I0314 00:20:15.628143 1964065 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0314 00:20:15.632850 1964065 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0314 00:20:15.633160 1964065 driver.go:392] Setting default libvirt URI to qemu:///system
	I0314 00:20:15.653935 1964065 docker.go:122] docker version: linux-25.0.4:Docker Engine - Community
	I0314 00:20:15.654059 1964065 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0314 00:20:15.725658 1964065 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:49 SystemTime:2024-03-14 00:20:15.715860331 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0314 00:20:15.725768 1964065 docker.go:295] overlay module found
	I0314 00:20:15.728027 1964065 out.go:97] Using the docker driver based on user configuration
	I0314 00:20:15.728054 1964065 start.go:297] selected driver: docker
	I0314 00:20:15.728062 1964065 start.go:901] validating driver "docker" against <nil>
	I0314 00:20:15.728181 1964065 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0314 00:20:15.784551 1964065 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:49 SystemTime:2024-03-14 00:20:15.775460228 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0314 00:20:15.784729 1964065 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0314 00:20:15.785064 1964065 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0314 00:20:15.785232 1964065 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0314 00:20:15.787852 1964065 out.go:169] Using Docker driver with root privileges
	I0314 00:20:15.789777 1964065 cni.go:84] Creating CNI manager for ""
	I0314 00:20:15.789803 1964065 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0314 00:20:15.789815 1964065 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0314 00:20:15.789894 1964065 start.go:340] cluster config:
	{Name:download-only-540583 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-540583 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 00:20:15.792345 1964065 out.go:97] Starting "download-only-540583" primary control-plane node in "download-only-540583" cluster
	I0314 00:20:15.792375 1964065 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0314 00:20:15.795015 1964065 out.go:97] Pulling base image v0.0.42-1710284843-18375 ...
	I0314 00:20:15.795041 1964065 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0314 00:20:15.795144 1964065 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f in local docker daemon
	I0314 00:20:15.809483 1964065 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f to local cache
	I0314 00:20:15.809636 1964065 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f in local cache directory
	I0314 00:20:15.809656 1964065 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f in local cache directory, skipping pull
	I0314 00:20:15.809661 1964065 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f exists in cache, skipping pull
	I0314 00:20:15.809669 1964065 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f as a tarball
	I0314 00:20:15.856072 1964065 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4
	I0314 00:20:15.856097 1964065 cache.go:56] Caching tarball of preloaded images
	I0314 00:20:15.856279 1964065 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0314 00:20:15.858421 1964065 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I0314 00:20:15.858446 1964065 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4 ...
	I0314 00:20:15.965300 1964065 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4?checksum=md5:cc2d75db20c4d651f0460755d6df7b03 -> /home/jenkins/minikube-integration/18375-1958430/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4
	I0314 00:20:22.259438 1964065 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4 ...
	I0314 00:20:22.259564 1964065 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/18375-1958430/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4 ...
	I0314 00:20:23.198183 1964065 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on containerd
	I0314 00:20:23.198553 1964065 profile.go:142] Saving config to /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/download-only-540583/config.json ...
	I0314 00:20:23.198588 1964065 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/download-only-540583/config.json: {Name:mke419e9b283a3f91894eb2e90e9fa91bd650930 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 00:20:23.199352 1964065 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0314 00:20:23.200001 1964065 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/18375-1958430/.minikube/cache/linux/arm64/v1.28.4/kubectl
	
	
	* The control-plane node download-only-540583 host does not exist
	  To start a cluster, run: "minikube start -p download-only-540583"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.09s)
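Worth noting from the log above: image.go first checks the local docker daemon for the kicbase image and then the on-disk cache; this v1.28.4 start found the image already in the cache directory left by the v1.20.0 run ("skipping pull"), so only the new preload and kubectl were downloaded. To inspect the daemon side of that check by hand:

	docker images --digests gcr.io/k8s-minikube/kicbase-builds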

                                                
                                    
TestDownloadOnly/v1.28.4/DeleteAll (0.2s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.4/DeleteAll (0.20s)

                                                
                                    
TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-540583
--- PASS: TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/json-events (9.73s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-541047 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-541047 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (9.732652113s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (9.73s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-541047
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-541047: exit status 85 (90.056866ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-455584 | jenkins | v1.32.0 | 14 Mar 24 00:20 UTC |                     |
	|         | -p download-only-455584           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0      |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 14 Mar 24 00:20 UTC | 14 Mar 24 00:20 UTC |
	| delete  | -p download-only-455584           | download-only-455584 | jenkins | v1.32.0 | 14 Mar 24 00:20 UTC | 14 Mar 24 00:20 UTC |
	| start   | -o=json --download-only           | download-only-540583 | jenkins | v1.32.0 | 14 Mar 24 00:20 UTC |                     |
	|         | -p download-only-540583           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 14 Mar 24 00:20 UTC | 14 Mar 24 00:20 UTC |
	| delete  | -p download-only-540583           | download-only-540583 | jenkins | v1.32.0 | 14 Mar 24 00:20 UTC | 14 Mar 24 00:20 UTC |
	| start   | -o=json --download-only           | download-only-541047 | jenkins | v1.32.0 | 14 Mar 24 00:20 UTC |                     |
	|         | -p download-only-541047           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/14 00:20:27
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0314 00:20:27.591351 1964228 out.go:291] Setting OutFile to fd 1 ...
	I0314 00:20:27.591485 1964228 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 00:20:27.591495 1964228 out.go:304] Setting ErrFile to fd 2...
	I0314 00:20:27.591501 1964228 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 00:20:27.591748 1964228 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18375-1958430/.minikube/bin
	I0314 00:20:27.592144 1964228 out.go:298] Setting JSON to true
	I0314 00:20:27.592995 1964228 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":28978,"bootTime":1710346650,"procs":166,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0314 00:20:27.593063 1964228 start.go:139] virtualization:  
	I0314 00:20:27.595667 1964228 out.go:97] [download-only-541047] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0314 00:20:27.595879 1964228 notify.go:220] Checking for updates...
	I0314 00:20:27.597855 1964228 out.go:169] MINIKUBE_LOCATION=18375
	I0314 00:20:27.600678 1964228 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0314 00:20:27.602913 1964228 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18375-1958430/kubeconfig
	I0314 00:20:27.604901 1964228 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18375-1958430/.minikube
	I0314 00:20:27.606809 1964228 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0314 00:20:27.610603 1964228 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0314 00:20:27.610884 1964228 driver.go:392] Setting default libvirt URI to qemu:///system
	I0314 00:20:27.634258 1964228 docker.go:122] docker version: linux-25.0.4:Docker Engine - Community
	I0314 00:20:27.634368 1964228 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0314 00:20:27.701489 1964228 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:49 SystemTime:2024-03-14 00:20:27.691101154 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0314 00:20:27.701616 1964228 docker.go:295] overlay module found
	I0314 00:20:27.703931 1964228 out.go:97] Using the docker driver based on user configuration
	I0314 00:20:27.703968 1964228 start.go:297] selected driver: docker
	I0314 00:20:27.703976 1964228 start.go:901] validating driver "docker" against <nil>
	I0314 00:20:27.704097 1964228 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0314 00:20:27.755508 1964228 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:49 SystemTime:2024-03-14 00:20:27.746787167 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0314 00:20:27.755673 1964228 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0314 00:20:27.755964 1964228 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0314 00:20:27.756118 1964228 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0314 00:20:27.758246 1964228 out.go:169] Using Docker driver with root privileges
	I0314 00:20:27.760421 1964228 cni.go:84] Creating CNI manager for ""
	I0314 00:20:27.760443 1964228 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0314 00:20:27.760455 1964228 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0314 00:20:27.760538 1964228 start.go:340] cluster config:
	{Name:download-only-541047 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:download-only-541047 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 00:20:27.762798 1964228 out.go:97] Starting "download-only-541047" primary control-plane node in "download-only-541047" cluster
	I0314 00:20:27.762819 1964228 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0314 00:20:27.764449 1964228 out.go:97] Pulling base image v0.0.42-1710284843-18375 ...
	I0314 00:20:27.764473 1964228 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime containerd
	I0314 00:20:27.764570 1964228 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f in local docker daemon
	I0314 00:20:27.778781 1964228 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f to local cache
	I0314 00:20:27.778943 1964228 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f in local cache directory
	I0314 00:20:27.778963 1964228 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f in local cache directory, skipping pull
	I0314 00:20:27.778969 1964228 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f exists in cache, skipping pull
	I0314 00:20:27.778977 1964228 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f as a tarball
	I0314 00:20:27.826329 1964228 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-arm64.tar.lz4
	I0314 00:20:27.826353 1964228 cache.go:56] Caching tarball of preloaded images
	I0314 00:20:27.826527 1964228 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime containerd
	I0314 00:20:27.829184 1964228 out.go:97] Downloading Kubernetes v1.29.0-rc.2 preload ...
	I0314 00:20:27.829212 1964228 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-arm64.tar.lz4 ...
	I0314 00:20:27.953043 1964228 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-arm64.tar.lz4?checksum=md5:adc883bf092a67b4673b5b5787f99b2f -> /home/jenkins/minikube-integration/18375-1958430/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-541047 host does not exist
	  To start a cluster, run: "minikube start -p download-only-541047"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.09s)
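
Note: the non-zero exit above is expected. A --download-only profile caches images and binaries but never creates a host, so "minikube logs" has no cluster to read (hence the "host does not exist" hint in the stdout). A minimal reproduction, with a hypothetical profile name:

	# Cache artifacts only; no node is started, so "logs" has nothing to collect.
	minikube start -p download-demo --download-only --force \
	  --kubernetes-version=v1.29.0-rc.2 --driver=docker --container-runtime=containerd
	minikube logs -p download-demo   # exits non-zero: the control-plane host does not exist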

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.22s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-541047
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
TestBinaryMirror (0.56s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-069067 --alsologtostderr --binary-mirror http://127.0.0.1:39983 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-069067" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-069067
--- PASS: TestBinaryMirror (0.56s)
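
For context, TestBinaryMirror only checks that minikube fetches its Kubernetes binaries from the URL given by --binary-mirror rather than the default download location. A rough sketch of the same setup, assuming a directory that already mirrors the upstream path layout (directory, port, and profile name are placeholders):

	# Serve a pre-populated mirror over HTTP, then point minikube at it.
	python3 -m http.server 39983 --directory /srv/k8s-binary-mirror &
	minikube start --download-only -p mirror-demo \
	  --binary-mirror http://127.0.0.1:39983 --driver=docker --container-runtime=containerd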

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-122411
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-122411: exit status 85 (82.721658ms)

                                                
                                                
-- stdout --
	* Profile "addons-122411" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-122411"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.09s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-122411
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-122411: exit status 85 (84.965824ms)

                                                
                                                
-- stdout --
	* Profile "addons-122411" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-122411"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.09s)

                                                
                                    
TestAddons/Setup (118.98s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-arm64 start -p addons-122411 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns
addons_test.go:109: (dbg) Done: out/minikube-linux-arm64 start -p addons-122411 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns: (1m58.976571213s)
--- PASS: TestAddons/Setup (118.98s)
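
All of the addons exercised by the parallel subtests below come from the --addons flags above; the same set can also be toggled one at a time on a running profile, e.g.:

	# Piecemeal equivalent of the --addons=... start flags, on an existing cluster:
	for a in registry metrics-server volumesnapshots csi-hostpath-driver gcp-auth \
	         cloud-spanner inspektor-gadget storage-provisioner-rancher \
	         nvidia-device-plugin yakd ingress ingress-dns; do
	  minikube -p addons-122411 addons enable "$a"
	done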

                                                
                                    
TestAddons/parallel/Registry (15.78s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 48.179704ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-k7jx6" [aaa61793-7482-468e-9a48-807a12f2eae9] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.005845034s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-ksf2h" [3c344254-4937-4c56-8655-cc99d0982dbf] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004571437s
addons_test.go:340: (dbg) Run:  kubectl --context addons-122411 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-122411 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-122411 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.585643189s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-arm64 -p addons-122411 ip
2024/03/14 00:22:53 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:388: (dbg) Run:  out/minikube-linux-arm64 -p addons-122411 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.78s)
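
The in-cluster probe above resolves the registry addon through its cluster DNS name; the DEBUG line shows the same registry answering on the node IP at port 5000. A rough manual check from the host, assuming the standard registry v2 HTTP API is exposed there unauthenticated (an assumption, not something the test asserts):

	IP=$(minikube -p addons-122411 ip)
	curl -sS "http://${IP}:5000/v2/_catalog"   # registry v2 catalog endpoint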

                                                
                                    
TestAddons/parallel/InspektorGadget (11.84s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-7k5kk" [61fc4252-8b9b-433d-a282-d92c2d7254f7] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004636095s
addons_test.go:841: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-122411
addons_test.go:841: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-122411: (5.831876788s)
--- PASS: TestAddons/parallel/InspektorGadget (11.84s)

                                                
                                    
TestAddons/parallel/MetricsServer (7s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 5.211866ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-69cf46c98-5qtdh" [fe7998ae-210d-4e08-81f9-3e7f19032943] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.007897326s
addons_test.go:415: (dbg) Run:  kubectl --context addons-122411 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-linux-arm64 -p addons-122411 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (7.00s)

                                                
                                    
TestAddons/parallel/CSI (77.53s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 47.400429ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-122411 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-122411 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-122411 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-122411 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-122411 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-122411 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-122411 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-122411 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-122411 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-122411 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-122411 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-122411 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-122411 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-122411 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-122411 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-122411 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-122411 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-122411 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-122411 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-122411 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-122411 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-122411 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-122411 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-122411 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-122411 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-122411 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-122411 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-122411 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-122411 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-122411 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-122411 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-122411 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-122411 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-122411 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-122411 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-122411 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-122411 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-122411 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-122411 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-122411 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-122411 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [2c822640-d1eb-49ce-892b-015b1b3ff36e] Pending
helpers_test.go:344: "task-pv-pod" [2c822640-d1eb-49ce-892b-015b1b3ff36e] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [2c822640-d1eb-49ce-892b-015b1b3ff36e] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.003752151s
addons_test.go:584: (dbg) Run:  kubectl --context addons-122411 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-122411 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-122411 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-122411 delete pod task-pv-pod
addons_test.go:594: (dbg) Done: kubectl --context addons-122411 delete pod task-pv-pod: (1.073492986s)
addons_test.go:600: (dbg) Run:  kubectl --context addons-122411 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-122411 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-122411 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-122411 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-122411 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-122411 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-122411 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-122411 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-122411 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-122411 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-122411 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-122411 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [a6a1eb24-8b31-43a0-833e-c6c0fa799c90] Pending
helpers_test.go:344: "task-pv-pod-restore" [a6a1eb24-8b31-43a0-833e-c6c0fa799c90] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [a6a1eb24-8b31-43a0-833e-c6c0fa799c90] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004268511s
addons_test.go:626: (dbg) Run:  kubectl --context addons-122411 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-122411 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-122411 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-arm64 -p addons-122411 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-linux-arm64 -p addons-122411 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.00137255s)
addons_test.go:642: (dbg) Run:  out/minikube-linux-arm64 -p addons-122411 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (77.53s)
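
The long run of helpers_test.go:394 lines above is a poll loop: the helper re-runs the same jsonpath query on the claim's status.phase until it reaches the phase the test expects. An equivalent wait in shell (treating Bound as the assumed target phase):

	# Poll the claim's phase the way the test helper does:
	until [ "$(kubectl --context addons-122411 -n default get pvc hpvc \
	           -o jsonpath='{.status.phase}')" = "Bound" ]; do
	  sleep 2
	done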

                                                
                                    
TestAddons/parallel/Headlamp (11.5s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-122411 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-122411 --alsologtostderr -v=1: (1.495574843s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5485c556b-wptdw" [aa5123a4-e8aa-4b49-80d2-1fb71112b75b] Pending
helpers_test.go:344: "headlamp-5485c556b-wptdw" [aa5123a4-e8aa-4b49-80d2-1fb71112b75b] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5485c556b-wptdw" [aa5123a4-e8aa-4b49-80d2-1fb71112b75b] Running / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5485c556b-wptdw" [aa5123a4-e8aa-4b49-80d2-1fb71112b75b] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.004497476s
--- PASS: TestAddons/parallel/Headlamp (11.50s)

                                                
                                    
TestAddons/parallel/CloudSpanner (6.74s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6548d5df46-9pvwd" [6bcd9ad4-816e-4722-bf0b-85f8326aa772] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003804294s
addons_test.go:860: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-122411
--- PASS: TestAddons/parallel/CloudSpanner (6.74s)

                                                
                                    
TestAddons/parallel/LocalPath (9.88s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-122411 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-122411 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-122411 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-122411 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-122411 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-122411 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-122411 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [1eded964-d2c5-48f7-89e9-91e3f52d6520] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [1eded964-d2c5-48f7-89e9-91e3f52d6520] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [1eded964-d2c5-48f7-89e9-91e3f52d6520] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.003634464s
addons_test.go:891: (dbg) Run:  kubectl --context addons-122411 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-arm64 -p addons-122411 ssh "cat /opt/local-path-provisioner/pvc-83152086-39fc-40cf-a51c-844db25038a8_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-122411 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-122411 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-arm64 -p addons-122411 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (9.88s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.57s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-98jl7" [3e3aa7cb-5082-4ad3-bd32-fb855ec98c06] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.005092217s
addons_test.go:955: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-122411
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.57s)

                                                
                                    
TestAddons/parallel/Yakd (6.01s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-7xh2x" [97ea3893-40e5-4de8-873f-142b8ffd415e] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004767395s
--- PASS: TestAddons/parallel/Yakd (6.01s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.19s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-122411 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-122411 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.19s)

                                                
                                    
TestAddons/StoppedEnableDisable (12.32s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-122411
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-122411: (12.020444494s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-122411
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-122411
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-122411
--- PASS: TestAddons/StoppedEnableDisable (12.32s)

                                                
                                    
TestCertOptions (38.25s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-449029 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-449029 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (35.548384421s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-449029 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-449029 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-449029 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-449029" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-449029
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-449029: (2.060809648s)
--- PASS: TestCertOptions (38.25s)
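
The assertions here reduce to: the extra --apiserver-ips/--apiserver-names must appear as SANs in the apiserver serving certificate, and the kubeconfig must carry port 8555. A quick manual spot check (the grep filters are additions, not part of the test):

	minikube -p cert-options-449029 ssh -- \
	  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" |
	  grep -A1 "Subject Alternative Name"   # expect 192.168.15.15 and www.google.com
	kubectl --context cert-options-449029 config view | grep 8555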

                                                
                                    
TestCertExpiration (231.14s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-798126 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-798126 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (42.086836335s)
E0314 01:02:38.371331 1963897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/addons-122411/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-798126 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-798126 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (6.689246667s)
helpers_test.go:175: Cleaning up "cert-expiration-798126" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-798126
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-798126: (2.361854684s)
--- PASS: TestCertExpiration (231.14s)
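
The shape of this test: start with deliberately short-lived certificates, wait out the 3-minute window (which accounts for most of the 231s runtime), then start again with --cert-expiration=8760h so the certificates are regenerated. Roughly, with a placeholder profile name and a timing assumption for the sleep:

	minikube start -p cert-demo --memory=2048 --cert-expiration=3m \
	  --driver=docker --container-runtime=containerd
	sleep 200   # outlive the 3m certificates
	minikube start -p cert-demo --memory=2048 --cert-expiration=8760h \
	  --driver=docker --container-runtime=containerd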

                                                
                                    
TestForceSystemdFlag (46.42s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-804124 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-804124 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (43.665583309s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-804124 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-804124" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-804124
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-804124: (2.411530915s)
--- PASS: TestForceSystemdFlag (46.42s)

                                                
                                    
TestForceSystemdEnv (39.85s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-008648 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-008648 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (37.123801843s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-008648 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-008648" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-008648
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-008648: (2.33450494s)
--- PASS: TestForceSystemdEnv (39.85s)

                                                
                                    
TestDockerEnvContainerd (50.44s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-846576 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-846576 --driver=docker  --container-runtime=containerd: (34.044437868s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-846576"
docker_test.go:189: (dbg) Done: /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-846576": (1.195535789s)
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-hv5ym1tUwzOK/agent.1980857" SSH_AGENT_PID="1980858" DOCKER_HOST=ssh://docker@127.0.0.1:35046 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-hv5ym1tUwzOK/agent.1980857" SSH_AGENT_PID="1980858" DOCKER_HOST=ssh://docker@127.0.0.1:35046 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-hv5ym1tUwzOK/agent.1980857" SSH_AGENT_PID="1980858" DOCKER_HOST=ssh://docker@127.0.0.1:35046 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.569209886s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-hv5ym1tUwzOK/agent.1980857" SSH_AGENT_PID="1980858" DOCKER_HOST=ssh://docker@127.0.0.1:35046 docker image ls"
helpers_test.go:175: Cleaning up "dockerenv-846576" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-846576
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-846576: (1.986760457s)
--- PASS: TestDockerEnvContainerd (50.44s)
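
The docker-env exchange above is the usual pattern for the containerd runtime: --ssh-host and --ssh-add export a DOCKER_HOST of the form ssh://docker@127.0.0.1:<port> plus an SSH agent, so an ordinary docker client on the host builds directly against the minikube node. The interactive equivalent of what the test scripts:

	eval "$(minikube -p dockerenv-846576 docker-env --ssh-host --ssh-add)"
	docker version                     # now answered by the minikube node over SSH
	DOCKER_BUILDKIT=0 docker build -t local/demo:latest testdata/docker-env
	docker image ls                    # the new image lands in the node's image store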

                                                
                                    
TestErrorSpam/setup (29.12s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-464194 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-464194 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-464194 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-464194 --driver=docker  --container-runtime=containerd: (29.121041975s)
--- PASS: TestErrorSpam/setup (29.12s)

                                                
                                    
TestErrorSpam/start (0.77s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-464194 --log_dir /tmp/nospam-464194 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-464194 --log_dir /tmp/nospam-464194 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-464194 --log_dir /tmp/nospam-464194 start --dry-run
--- PASS: TestErrorSpam/start (0.77s)

                                                
                                    
TestErrorSpam/status (1.04s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-464194 --log_dir /tmp/nospam-464194 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-464194 --log_dir /tmp/nospam-464194 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-464194 --log_dir /tmp/nospam-464194 status
--- PASS: TestErrorSpam/status (1.04s)

                                                
                                    
TestErrorSpam/pause (1.7s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-464194 --log_dir /tmp/nospam-464194 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-464194 --log_dir /tmp/nospam-464194 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-464194 --log_dir /tmp/nospam-464194 pause
--- PASS: TestErrorSpam/pause (1.70s)

                                                
                                    
TestErrorSpam/unpause (1.79s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-464194 --log_dir /tmp/nospam-464194 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-464194 --log_dir /tmp/nospam-464194 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-464194 --log_dir /tmp/nospam-464194 unpause
--- PASS: TestErrorSpam/unpause (1.79s)

                                                
                                    
TestErrorSpam/stop (1.48s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-464194 --log_dir /tmp/nospam-464194 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-464194 --log_dir /tmp/nospam-464194 stop: (1.260725555s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-464194 --log_dir /tmp/nospam-464194 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-464194 --log_dir /tmp/nospam-464194 stop
--- PASS: TestErrorSpam/stop (1.48s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/18375-1958430/.minikube/files/etc/test/nested/copy/1963897/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (61.37s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-arm64 start -p functional-362954 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
functional_test.go:2230: (dbg) Done: out/minikube-linux-arm64 start -p functional-362954 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (1m1.369243896s)
--- PASS: TestFunctional/serial/StartWithProxy (61.37s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (6.19s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-arm64 start -p functional-362954 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-arm64 start -p functional-362954 --alsologtostderr -v=8: (6.185615704s)
functional_test.go:659: soft start took 6.192694473s for "functional-362954" cluster.
--- PASS: TestFunctional/serial/SoftStart (6.19s)

TestFunctional/serial/KubeContext (0.07s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.07s)

TestFunctional/serial/KubectlGetPods (0.1s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-362954 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.04s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-362954 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-362954 cache add registry.k8s.io/pause:3.1: (1.481016844s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-362954 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-362954 cache add registry.k8s.io/pause:3.3: (1.293189215s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-362954 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-362954 cache add registry.k8s.io/pause:latest: (1.261710405s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.04s)

TestFunctional/serial/CacheCmd/cache/add_local (1.5s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-362954 /tmp/TestFunctionalserialCacheCmdcacheadd_local1436840203/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-arm64 -p functional-362954 cache add minikube-local-cache-test:functional-362954
E0314 00:27:38.374047 1963897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/addons-122411/client.crt: no such file or directory
E0314 00:27:38.380666 1963897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/addons-122411/client.crt: no such file or directory
E0314 00:27:38.390980 1963897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/addons-122411/client.crt: no such file or directory
E0314 00:27:38.411294 1963897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/addons-122411/client.crt: no such file or directory
E0314 00:27:38.451551 1963897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/addons-122411/client.crt: no such file or directory
E0314 00:27:38.531807 1963897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/addons-122411/client.crt: no such file or directory
functional_test.go:1090: (dbg) Run:  out/minikube-linux-arm64 -p functional-362954 cache delete minikube-local-cache-test:functional-362954
E0314 00:27:38.692672 1963897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/addons-122411/client.crt: no such file or directory
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-362954
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.50s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.34s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-arm64 -p functional-362954 ssh sudo crictl images
E0314 00:27:39.013312 1963897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/addons-122411/client.crt: no such file or directory
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.34s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.1s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-arm64 -p functional-362954 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-arm64 -p functional-362954 ssh sudo crictl inspecti registry.k8s.io/pause:latest
E0314 00:27:39.653978 1963897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/addons-122411/client.crt: no such file or directory
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-362954 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (307.38724ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-arm64 -p functional-362954 cache reload
E0314 00:27:40.934405 1963897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/addons-122411/client.crt: no such file or directory
functional_test.go:1154: (dbg) Done: out/minikube-linux-arm64 -p functional-362954 cache reload: (1.151954836s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-arm64 -p functional-362954 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.10s)
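
The sequence above is the cache contract: "crictl inspecti" exits non-zero once the image has been removed from the node, and "cache reload" pushes every locally cached image back in. A minimal Go sketch of the same check-then-reload loop, assuming a "minikube" binary on PATH and the profile name from this run (illustrative only, not part of the test suite):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	profile := "functional-362954" // profile name assumed from this run
	image := "registry.k8s.io/pause:latest"

	// "crictl inspecti" exits non-zero when the image is absent from the node.
	check := exec.Command("minikube", "-p", profile, "ssh", "sudo crictl inspecti "+image)
	if err := check.Run(); err != nil {
		fmt.Println("image missing, reloading cache:", err)
		// "cache reload" loads everything in the local image cache back into the node.
		if out, err := exec.Command("minikube", "-p", profile, "cache", "reload").CombinedOutput(); err != nil {
			fmt.Printf("reload failed: %v\n%s", err, out)
			return
		}
	}
	fmt.Println("image present")
}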

TestFunctional/serial/CacheCmd/cache/delete (0.13s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

TestFunctional/serial/MinikubeKubectlCmd (0.15s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-arm64 -p functional-362954 kubectl -- --context functional-362954 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.15s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.16s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-362954 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.16s)

TestFunctional/serial/ExtraConfig (43.98s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-arm64 start -p functional-362954 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0314 00:27:43.494935 1963897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/addons-122411/client.crt: no such file or directory
E0314 00:27:48.615125 1963897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/addons-122411/client.crt: no such file or directory
E0314 00:27:58.856069 1963897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/addons-122411/client.crt: no such file or directory
E0314 00:28:19.337222 1963897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/addons-122411/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-arm64 start -p functional-362954 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (43.983661936s)
functional_test.go:757: restart took 43.983769407s for "functional-362954" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (43.98s)
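
The --extra-config value above follows minikube's component.key=value shape (component "apiserver", key "enable-admission-plugins"). A minimal parsing sketch of that shape, assuming a single flag string rather than minikube's actual flag handling:

package main

import (
	"fmt"
	"strings"
)

// parseExtraConfig splits "component.key=value" into its three parts.
func parseExtraConfig(s string) (component, key, value string, err error) {
	kv := strings.SplitN(s, "=", 2)
	if len(kv) != 2 {
		return "", "", "", fmt.Errorf("missing '=' in %q", s)
	}
	ck := strings.SplitN(kv[0], ".", 2)
	if len(ck) != 2 {
		return "", "", "", fmt.Errorf("missing component prefix in %q", s)
	}
	return ck[0], ck[1], kv[1], nil
}

func main() {
	c, k, v, err := parseExtraConfig("apiserver.enable-admission-plugins=NamespaceAutoProvision")
	if err != nil {
		panic(err)
	}
	fmt.Println(c, k, v) // apiserver enable-admission-plugins NamespaceAutoProvision
}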

TestFunctional/serial/ComponentHealth (0.1s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-362954 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)
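
The health check reads "kubectl get po -l tier=control-plane -o=json" and asserts that each control-plane pod reports phase Running and a Ready condition of True, which is what the phase/status pairs above print. A minimal extraction sketch over the upstream PodList JSON shape (the stub document stands in for kubectl output):

package main

import (
	"encoding/json"
	"fmt"
)

// podList declares only the PodList fields the check needs.
type podList struct {
	Items []struct {
		Metadata struct {
			Name string `json:"name"`
		} `json:"metadata"`
		Status struct {
			Phase      string `json:"phase"`
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	raw := []byte(`{"items":[{"metadata":{"name":"etcd"},"status":{"phase":"Running","conditions":[{"type":"Ready","status":"True"}]}}]}`)

	var pl podList
	if err := json.Unmarshal(raw, &pl); err != nil {
		panic(err)
	}
	for _, p := range pl.Items {
		ready := false
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" && c.Status == "True" {
				ready = true
			}
		}
		fmt.Printf("%s phase: %s, ready: %v\n", p.Metadata.Name, p.Status.Phase, ready)
	}
}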

TestFunctional/serial/LogsCmd (1.79s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-arm64 -p functional-362954 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-arm64 -p functional-362954 logs: (1.785231564s)
--- PASS: TestFunctional/serial/LogsCmd (1.79s)

TestFunctional/serial/LogsFileCmd (1.77s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-arm64 -p functional-362954 logs --file /tmp/TestFunctionalserialLogsFileCmd2012030214/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-arm64 -p functional-362954 logs --file /tmp/TestFunctionalserialLogsFileCmd2012030214/001/logs.txt: (1.772983112s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.77s)

TestFunctional/serial/InvalidService (4.86s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-362954 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-362954
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-362954: exit status 115 (624.14896ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:32533 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-362954 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.86s)
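
"minikube service" exits with status 115 (SVC_UNREACHABLE) because the service object exists but no running pod backs it. The same condition is visible in the service's Endpoints object; a sketch of that check via kubectl, assuming kubectl on PATH (not minikube's internals):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// An Endpoints object with no addresses means no running pod backs the service.
	out, err := exec.Command("kubectl", "get", "endpoints", "invalid-svc",
		"-o", "jsonpath={.subsets[*].addresses[*].ip}").Output()
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	if strings.TrimSpace(string(out)) == "" {
		fmt.Println("no running pod for service invalid-svc found")
		return
	}
	fmt.Println("backing pods:", string(out))
}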

TestFunctional/parallel/ConfigCmd (0.52s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-362954 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-362954 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-362954 config get cpus: exit status 14 (75.149115ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-362954 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-362954 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-362954 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-362954 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-362954 config get cpus: exit status 14 (100.230444ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.52s)
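
Note that "config get" on an unset key fails with exit status 14 instead of succeeding with empty output, so callers can tell "unset" apart from "set to empty". A sketch of that convention; the exit code and message mirror the log above, but this is not minikube's source:

package main

import (
	"fmt"
	"os"
)

// exitCodeNotFound mirrors the status 14 observed above for a missing config key.
const exitCodeNotFound = 14

func main() {
	config := map[string]string{} // "cpus" has just been unset
	if v, ok := config["cpus"]; ok {
		fmt.Println(v)
		return
	}
	fmt.Fprintln(os.Stderr, "Error: specified key could not be found in config")
	os.Exit(exitCodeNotFound)
}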

TestFunctional/parallel/DashboardCmd (10.41s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-362954 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-362954 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 1996907: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (10.41s)

TestFunctional/parallel/DryRun (0.61s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-arm64 start -p functional-362954 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-362954 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (249.331139ms)

-- stdout --
	* [functional-362954] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18375
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18375-1958430/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18375-1958430/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0314 00:29:20.298124 1996099 out.go:291] Setting OutFile to fd 1 ...
	I0314 00:29:20.298324 1996099 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 00:29:20.298345 1996099 out.go:304] Setting ErrFile to fd 2...
	I0314 00:29:20.298362 1996099 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 00:29:20.298629 1996099 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18375-1958430/.minikube/bin
	I0314 00:29:20.299167 1996099 out.go:298] Setting JSON to false
	I0314 00:29:20.300258 1996099 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":29511,"bootTime":1710346650,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0314 00:29:20.300360 1996099 start.go:139] virtualization:  
	I0314 00:29:20.304926 1996099 out.go:177] * [functional-362954] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0314 00:29:20.306895 1996099 out.go:177]   - MINIKUBE_LOCATION=18375
	I0314 00:29:20.308504 1996099 notify.go:220] Checking for updates...
	I0314 00:29:20.317148 1996099 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0314 00:29:20.319953 1996099 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18375-1958430/kubeconfig
	I0314 00:29:20.321878 1996099 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18375-1958430/.minikube
	I0314 00:29:20.323709 1996099 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0314 00:29:20.325284 1996099 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0314 00:29:20.327791 1996099 config.go:182] Loaded profile config "functional-362954": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0314 00:29:20.328342 1996099 driver.go:392] Setting default libvirt URI to qemu:///system
	I0314 00:29:20.356508 1996099 docker.go:122] docker version: linux-25.0.4:Docker Engine - Community
	I0314 00:29:20.356641 1996099 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0314 00:29:20.448271 1996099 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:57 SystemTime:2024-03-14 00:29:20.438854731 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0314 00:29:20.448380 1996099 docker.go:295] overlay module found
	I0314 00:29:20.450520 1996099 out.go:177] * Using the docker driver based on existing profile
	I0314 00:29:20.452316 1996099 start.go:297] selected driver: docker
	I0314 00:29:20.452336 1996099 start.go:901] validating driver "docker" against &{Name:functional-362954 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-362954 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 00:29:20.452451 1996099 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0314 00:29:20.454866 1996099 out.go:177] 
	W0314 00:29:20.456739 1996099 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0314 00:29:20.458447 1996099 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-arm64 start -p functional-362954 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.61s)
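
The dry run fails fast because the requested memory (250MB) is validated against a usable floor (1800MB in the message) before any resources are created. A minimal pre-flight sketch; the constant and wording mirror the log, but this is not minikube's code:

package main

import (
	"fmt"
	"os"
)

const minUsableMemoryMB = 1800 // floor reported in the error above

func validateMemory(requestedMB int) error {
	if requestedMB < minUsableMemoryMB {
		return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMB",
			requestedMB, minUsableMemoryMB)
	}
	return nil
}

func main() {
	if err := validateMemory(250); err != nil {
		fmt.Fprintln(os.Stderr, "X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY:", err)
		os.Exit(23) // exit status observed above
	}
}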

TestFunctional/parallel/InternationalLanguage (0.33s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-arm64 start -p functional-362954 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-362954 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (328.941268ms)

-- stdout --
	* [functional-362954] minikube v1.32.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18375
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18375-1958430/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18375-1958430/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0314 00:29:22.216405 1996562 out.go:291] Setting OutFile to fd 1 ...
	I0314 00:29:22.216602 1996562 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 00:29:22.216614 1996562 out.go:304] Setting ErrFile to fd 2...
	I0314 00:29:22.216627 1996562 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 00:29:22.217998 1996562 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18375-1958430/.minikube/bin
	I0314 00:29:22.218440 1996562 out.go:298] Setting JSON to false
	I0314 00:29:22.219532 1996562 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":29513,"bootTime":1710346650,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0314 00:29:22.219614 1996562 start.go:139] virtualization:  
	I0314 00:29:22.222119 1996562 out.go:177] * [functional-362954] minikube v1.32.0 sur Ubuntu 20.04 (arm64)
	I0314 00:29:22.224739 1996562 out.go:177]   - MINIKUBE_LOCATION=18375
	I0314 00:29:22.226515 1996562 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0314 00:29:22.224780 1996562 notify.go:220] Checking for updates...
	I0314 00:29:22.228257 1996562 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18375-1958430/kubeconfig
	I0314 00:29:22.230238 1996562 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18375-1958430/.minikube
	I0314 00:29:22.232030 1996562 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0314 00:29:22.234199 1996562 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0314 00:29:22.236931 1996562 config.go:182] Loaded profile config "functional-362954": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0314 00:29:22.237428 1996562 driver.go:392] Setting default libvirt URI to qemu:///system
	I0314 00:29:22.269765 1996562 docker.go:122] docker version: linux-25.0.4:Docker Engine - Community
	I0314 00:29:22.270028 1996562 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0314 00:29:22.396974 1996562 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:57 SystemTime:2024-03-14 00:29:22.387615891 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0314 00:29:22.397086 1996562 docker.go:295] overlay module found
	I0314 00:29:22.400889 1996562 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0314 00:29:22.403017 1996562 start.go:297] selected driver: docker
	I0314 00:29:22.403034 1996562 start.go:901] validating driver "docker" against &{Name:functional-362954 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-362954 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 00:29:22.403338 1996562 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0314 00:29:22.409164 1996562 out.go:177] 
	W0314 00:29:22.411678 1996562 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0314 00:29:22.413823 1996562 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.33s)
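
The French output is the same RSRC_INSUFFICIENT_REQ_MEMORY failure rendered through a locale-selected message catalog (the test runs with a French locale in the environment). A toy sketch of locale-keyed lookup, assuming a two-entry map; minikube's real mechanism (translation catalogs shipped with the binary) is more involved:

package main

import (
	"fmt"
	"os"
	"strings"
)

// messages maps a language tag to the driver-selection line seen above.
var messages = map[string]string{
	"en": "Using the docker driver based on existing profile",
	"fr": "Utilisation du pilote docker basé sur le profil existant",
}

func main() {
	lang := os.Getenv("LC_ALL")
	if lang == "" {
		lang = os.Getenv("LANG") // e.g. "fr_FR.UTF-8"
	}
	tag := "en"
	if strings.HasPrefix(lang, "fr") {
		tag = "fr"
	}
	fmt.Println("* " + messages[tag])
}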

TestFunctional/parallel/StatusCmd (1.38s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-arm64 -p functional-362954 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-arm64 -p functional-362954 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-arm64 -p functional-362954 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.38s)

TestFunctional/parallel/ServiceCmdConnect (8.76s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1623: (dbg) Run:  kubectl --context functional-362954 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-362954 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-7799dfb7c6-kjlvr" [c52a82c0-404a-43e0-8c8a-a8a3893156a5] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-7799dfb7c6-kjlvr" [c52a82c0-404a-43e0-8c8a-a8a3893156a5] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.015122757s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-arm64 -p functional-362954 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.49.2:30088
functional_test.go:1671: http://192.168.49.2:30088: success! body:

Hostname: hello-node-connect-7799dfb7c6-kjlvr

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:30088
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.76s)
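
"service --url" resolves to node IP plus the allocated NodePort (http://192.168.49.2:30088 here), so any plain HTTP client can consume it. A trivial Go client for the printed endpoint; the URL is the one from this run and will differ between runs:

package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// NodePort URL printed by "minikube service hello-node-connect --url" above.
	resp, err := http.Get("http://192.168.49.2:30088")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%s", body) // echoserver replies with the request details shown above
}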

TestFunctional/parallel/AddonsCmd (0.23s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-arm64 -p functional-362954 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-arm64 -p functional-362954 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.23s)

TestFunctional/parallel/PersistentVolumeClaim (26.26s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [b21a5909-77ed-42ff-8122-29d4ab125052] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.005050622s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-362954 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-362954 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-362954 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-362954 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [da1958b0-a216-47ef-bc53-e5f748f6150a] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [da1958b0-a216-47ef-bc53-e5f748f6150a] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.003462178s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-362954 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-362954 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-362954 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [14de33a0-65bc-42ec-a2f2-be0322067f64] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [14de33a0-65bc-42ec-a2f2-be0322067f64] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003694465s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-362954 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (26.26s)

TestFunctional/parallel/SSHCmd (0.68s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-arm64 -p functional-362954 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-arm64 -p functional-362954 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.68s)

TestFunctional/parallel/CpCmd (2.22s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-362954 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-362954 ssh -n functional-362954 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-362954 cp functional-362954:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd636070426/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-362954 ssh -n functional-362954 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-362954 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-362954 ssh -n functional-362954 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.22s)

TestFunctional/parallel/FileSync (0.35s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/1963897/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-arm64 -p functional-362954 ssh "sudo cat /etc/test/nested/copy/1963897/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.35s)

TestFunctional/parallel/CertSync (2.41s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/1963897.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-362954 ssh "sudo cat /etc/ssl/certs/1963897.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/1963897.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-362954 ssh "sudo cat /usr/share/ca-certificates/1963897.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-362954 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/19638972.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-362954 ssh "sudo cat /etc/ssl/certs/19638972.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/19638972.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-362954 ssh "sudo cat /usr/share/ca-certificates/19638972.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-362954 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.41s)

TestFunctional/parallel/NodeLabels (0.15s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-362954 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.15s)
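
The go-template in that command ranges over the first node's label map and prints each key. The same construct in isolation, using the standard library that kubectl's template engine is built on (sample labels are placeholders):

package main

import (
	"os"
	"text/template"
)

func main() {
	// Same shape as the kubectl invocation: range over a node's label map, print keys.
	const tmpl = `{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}`

	data := map[string]any{
		"items": []map[string]any{
			{"metadata": map[string]any{"labels": map[string]string{
				"kubernetes.io/arch": "arm64",
				"kubernetes.io/os":   "linux",
			}}},
		},
	}
	t := template.Must(template.New("labels").Parse(tmpl))
	if err := t.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
}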

TestFunctional/parallel/NonActiveRuntimeDisabled (0.93s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-362954 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-362954 ssh "sudo systemctl is-active docker": exit status 1 (543.570708ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-362954 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-362954 ssh "sudo systemctl is-active crio": exit status 1 (388.647476ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.93s)
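
"systemctl is-active" prints the unit state and exits non-zero for anything but active (status 3 above, surfaced through ssh), which is how the test asserts the non-selected runtimes are disabled. A sketch of reading that exit code from Go, assuming a systemd host:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Exit status 0 means active; non-zero (3 here) means the unit is not running.
	out, err := exec.Command("systemctl", "is-active", "docker").CombinedOutput()
	fmt.Printf("state: %s", out) // e.g. "inactive\n"

	var ee *exec.ExitError
	if errors.As(err, &ee) {
		fmt.Println("exit status:", ee.ExitCode())
	} else if err == nil {
		fmt.Println("unit is active")
	}
}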

TestFunctional/parallel/License (0.44s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.44s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.7s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-362954 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-362954 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-362954 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 1992181: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-362954 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.70s)

                                                
                                    
TestFunctional/parallel/Version/short (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-arm64 -p functional-362954 version --short
--- PASS: TestFunctional/parallel/Version/short (0.09s)

                                                
                                    
TestFunctional/parallel/Version/components (1.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-arm64 -p functional-362954 version -o=json --components
functional_test.go:2266: (dbg) Done: out/minikube-linux-arm64 -p functional-362954 version -o=json --components: (1.335281192s)
--- PASS: TestFunctional/parallel/Version/components (1.34s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-362954 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-362954 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [c5b6a33d-9c06-400f-922b-ee5c58d50638] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [c5b6a33d-9c06-400f-922b-ee5c58d50638] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.004240742s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.44s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-362954 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-362954 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-362954
docker.io/kindest/kindnetd:v20240202-8f1494ea
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-362954 image ls --format short --alsologtostderr:
I0314 00:29:24.679490 1996944 out.go:291] Setting OutFile to fd 1 ...
I0314 00:29:24.679687 1996944 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0314 00:29:24.679698 1996944 out.go:304] Setting ErrFile to fd 2...
I0314 00:29:24.679704 1996944 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0314 00:29:24.679979 1996944 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18375-1958430/.minikube/bin
I0314 00:29:24.680759 1996944 config.go:182] Loaded profile config "functional-362954": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0314 00:29:24.681064 1996944 config.go:182] Loaded profile config "functional-362954": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0314 00:29:24.681637 1996944 cli_runner.go:164] Run: docker container inspect functional-362954 --format={{.State.Status}}
I0314 00:29:24.699931 1996944 ssh_runner.go:195] Run: systemctl --version
I0314 00:29:24.699998 1996944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-362954
I0314 00:29:24.716089 1996944 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35056 SSHKeyPath:/home/jenkins/minikube-integration/18375-1958430/.minikube/machines/functional-362954/id_rsa Username:docker}
I0314 00:29:24.811894 1996944 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)
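
Note: as the stderr trace shows, image ls works by opening an SSH session to the node and running sudo crictl images --output json, then rendering the result in the requested format. A rough equivalent by hand, against the same profile:

	out/minikube-linux-arm64 -p functional-362954 image ls --format short               # also: table, json, yaml
	out/minikube-linux-arm64 -p functional-362954 ssh "sudo crictl images --output json"  # the raw data being formatted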

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-362954 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-362954 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/kube-proxy                  | v1.28.4            | sha256:3ca3ca | 22MB   |
| registry.k8s.io/pause                       | latest             | sha256:8cb209 | 71.3kB |
| docker.io/kindest/kindnetd                  | v20230809-80a64d96 | sha256:04b4ea | 25.3MB |
| docker.io/library/minikube-local-cache-test | functional-362954  | sha256:3053dc | 1kB    |
| docker.io/library/nginx                     | alpine             | sha256:be5e6f | 17.6MB |
| registry.k8s.io/coredns/coredns             | v1.10.1            | sha256:97e046 | 14.6MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:1611cd | 1.94MB |
| localhost/my-image                          | functional-362954  | sha256:7264bf | 831kB  |
| registry.k8s.io/kube-scheduler              | v1.28.4            | sha256:05c284 | 17.1MB |
| registry.k8s.io/kube-controller-manager     | v1.28.4            | sha256:9961cb | 30.4MB |
| registry.k8s.io/pause                       | 3.1                | sha256:8057e0 | 262kB  |
| docker.io/library/nginx                     | latest             | sha256:070027 | 67.2MB |
| registry.k8s.io/echoserver-arm              | 1.8                | sha256:72565b | 45.3MB |
| registry.k8s.io/etcd                        | 3.5.9-0            | sha256:9cdd64 | 86.5MB |
| registry.k8s.io/kube-apiserver              | v1.28.4            | sha256:04b4c4 | 31.6MB |
| docker.io/kindest/kindnetd                  | v20240202-8f1494ea | sha256:4740c1 | 25.3MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:ba04bb | 8.03MB |
| registry.k8s.io/pause                       | 3.3                | sha256:3d1873 | 249kB  |
| registry.k8s.io/pause                       | 3.9                | sha256:829e9d | 268kB  |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-362954 image ls --format table --alsologtostderr:
I0314 00:29:28.309822 1997420 out.go:291] Setting OutFile to fd 1 ...
I0314 00:29:28.310142 1997420 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0314 00:29:28.310251 1997420 out.go:304] Setting ErrFile to fd 2...
I0314 00:29:28.310291 1997420 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0314 00:29:28.310642 1997420 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18375-1958430/.minikube/bin
I0314 00:29:28.311620 1997420 config.go:182] Loaded profile config "functional-362954": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0314 00:29:28.311951 1997420 config.go:182] Loaded profile config "functional-362954": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0314 00:29:28.312689 1997420 cli_runner.go:164] Run: docker container inspect functional-362954 --format={{.State.Status}}
I0314 00:29:28.334326 1997420 ssh_runner.go:195] Run: systemctl --version
I0314 00:29:28.334382 1997420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-362954
I0314 00:29:28.359295 1997420 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35056 SSHKeyPath:/home/jenkins/minikube-integration/18375-1958430/.minikube/machines/functional-362954/id_rsa Username:docker}
I0314 00:29:28.461521 1997420 ssh_runner.go:195] Run: sudo crictl images --output json
2024/03/14 00:29:32 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.32s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-362954 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-362954 image ls --format json --alsologtostderr:
[{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"18306114"},{"id":"sha256:4740c1948d3fceb8d7dacc63033aa6299d80794ee4f4811539ec1081d9211f3d","repoDigests":["docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988"],"repoTags":["docker.io/kindest/kindnetd:v20240202-8f1494ea"],"size":"25336339"},{"id":"sha256:070027a3cbe09ac697570e31174acc1699701bd0626d2cf71e01623f41a10f53","repoDigests":["docker.io/library/nginx@sha256:6db391d1c0cfb30588ba0bf72ea999404f2764febf0f1f196acd5867ac7efa7e"],"repoTags":["docker.io/library/nginx:latest"],"size":"67216851"},{"id":"sha256:7264bf588448fca04f076ea297a1bb89e8507f5aa851146e2e05b1b61f774e84","repoDigests":[],"repoTags":["localhost/my-image:functional-362954"],"size":"830617"},{"id":"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"14557471"},{"id":"sha256:9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace","repoDigests":["registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"86464836"},{"id":"sha256:04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"25324029"},{"id":"sha256:05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54","repoDigests":["registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"17082307"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39","repoDigests":["registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"22001357"},{"id":"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"268051"},{"id":"sha256:9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"30360149"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"},{"id":"sha256:3053dc9d6c1998ae0f10af1be2ae3396f19c7da983a6528ef7afc9b1eea5a485","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-362954"],"size":"1005"},{"id":"sha256:be5e6f23a9904ed26efa7a49fb3d5e63d1c488dbb7b5134e869488afd747ec3f","repoDigests":["docker.io/library/nginx@sha256:6a2f8b28e45c4adea04ec207a251fd4a2df03ddc930f782af51e315ebc76e9a9"],"repoTags":["docker.io/library/nginx:alpine"],"size":"17601423"},{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},{"id":"sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"45324675"},{"id":"sha256:04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419","repoDigests":["registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"31582354"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-362954 image ls --format json --alsologtostderr:
I0314 00:29:28.040596 1997394 out.go:291] Setting OutFile to fd 1 ...
I0314 00:29:28.040753 1997394 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0314 00:29:28.040763 1997394 out.go:304] Setting ErrFile to fd 2...
I0314 00:29:28.040769 1997394 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0314 00:29:28.042033 1997394 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18375-1958430/.minikube/bin
I0314 00:29:28.043408 1997394 config.go:182] Loaded profile config "functional-362954": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0314 00:29:28.043534 1997394 config.go:182] Loaded profile config "functional-362954": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0314 00:29:28.044094 1997394 cli_runner.go:164] Run: docker container inspect functional-362954 --format={{.State.Status}}
I0314 00:29:28.060640 1997394 ssh_runner.go:195] Run: systemctl --version
I0314 00:29:28.060698 1997394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-362954
I0314 00:29:28.099588 1997394 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35056 SSHKeyPath:/home/jenkins/minikube-integration/18375-1958430/.minikube/machines/functional-362954/id_rsa Username:docker}
I0314 00:29:28.196375 1997394 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-362954 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-362954 image ls --format yaml --alsologtostderr:
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "14557471"
- id: sha256:3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39
repoDigests:
- registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "22001357"
- id: sha256:05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "17082307"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "268051"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:070027a3cbe09ac697570e31174acc1699701bd0626d2cf71e01623f41a10f53
repoDigests:
- docker.io/library/nginx@sha256:6db391d1c0cfb30588ba0bf72ea999404f2764febf0f1f196acd5867ac7efa7e
repoTags:
- docker.io/library/nginx:latest
size: "67216851"
- id: sha256:be5e6f23a9904ed26efa7a49fb3d5e63d1c488dbb7b5134e869488afd747ec3f
repoDigests:
- docker.io/library/nginx@sha256:6a2f8b28e45c4adea04ec207a251fd4a2df03ddc930f782af51e315ebc76e9a9
repoTags:
- docker.io/library/nginx:alpine
size: "17601423"
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "45324675"
- id: sha256:9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "30360149"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "25324029"
- id: sha256:04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "31582354"
- id: sha256:9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace
repoDigests:
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "86464836"
- id: sha256:3053dc9d6c1998ae0f10af1be2ae3396f19c7da983a6528ef7afc9b1eea5a485
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-362954
size: "1005"
- id: sha256:4740c1948d3fceb8d7dacc63033aa6299d80794ee4f4811539ec1081d9211f3d
repoDigests:
- docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988
repoTags:
- docker.io/kindest/kindnetd:v20240202-8f1494ea
size: "25336339"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-362954 image ls --format yaml --alsologtostderr:
I0314 00:29:24.925439 1996970 out.go:291] Setting OutFile to fd 1 ...
I0314 00:29:24.925614 1996970 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0314 00:29:24.925642 1996970 out.go:304] Setting ErrFile to fd 2...
I0314 00:29:24.925662 1996970 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0314 00:29:24.925936 1996970 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18375-1958430/.minikube/bin
I0314 00:29:24.926643 1996970 config.go:182] Loaded profile config "functional-362954": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0314 00:29:24.926807 1996970 config.go:182] Loaded profile config "functional-362954": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0314 00:29:24.927353 1996970 cli_runner.go:164] Run: docker container inspect functional-362954 --format={{.State.Status}}
I0314 00:29:24.943299 1996970 ssh_runner.go:195] Run: systemctl --version
I0314 00:29:24.943365 1996970 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-362954
I0314 00:29:24.959353 1996970 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35056 SSHKeyPath:/home/jenkins/minikube-integration/18375-1958430/.minikube/machines/functional-362954/id_rsa Username:docker}
I0314 00:29:25.055970 1996970 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (2.86s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-arm64 -p functional-362954 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-362954 ssh pgrep buildkitd: exit status 1 (283.575972ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p functional-362954 image build -t localhost/my-image:functional-362954 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p functional-362954 image build -t localhost/my-image:functional-362954 testdata/build --alsologtostderr: (2.287387714s)
functional_test.go:322: (dbg) Stderr: out/minikube-linux-arm64 -p functional-362954 image build -t localhost/my-image:functional-362954 testdata/build --alsologtostderr:
I0314 00:29:25.461240 1997045 out.go:291] Setting OutFile to fd 1 ...
I0314 00:29:25.462254 1997045 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0314 00:29:25.462270 1997045 out.go:304] Setting ErrFile to fd 2...
I0314 00:29:25.462276 1997045 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0314 00:29:25.462583 1997045 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18375-1958430/.minikube/bin
I0314 00:29:25.463423 1997045 config.go:182] Loaded profile config "functional-362954": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0314 00:29:25.464086 1997045 config.go:182] Loaded profile config "functional-362954": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0314 00:29:25.464604 1997045 cli_runner.go:164] Run: docker container inspect functional-362954 --format={{.State.Status}}
I0314 00:29:25.481751 1997045 ssh_runner.go:195] Run: systemctl --version
I0314 00:29:25.481815 1997045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-362954
I0314 00:29:25.499644 1997045 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35056 SSHKeyPath:/home/jenkins/minikube-integration/18375-1958430/.minikube/machines/functional-362954/id_rsa Username:docker}
I0314 00:29:25.595670 1997045 build_images.go:161] Building image from path: /tmp/build.4127622262.tar
I0314 00:29:25.595787 1997045 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0314 00:29:25.605146 1997045 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.4127622262.tar
I0314 00:29:25.608638 1997045 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.4127622262.tar: stat -c "%s %y" /var/lib/minikube/build/build.4127622262.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.4127622262.tar': No such file or directory
I0314 00:29:25.608669 1997045 ssh_runner.go:362] scp /tmp/build.4127622262.tar --> /var/lib/minikube/build/build.4127622262.tar (3072 bytes)
I0314 00:29:25.638255 1997045 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.4127622262
I0314 00:29:25.647238 1997045 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.4127622262 -xf /var/lib/minikube/build/build.4127622262.tar
I0314 00:29:25.656633 1997045 containerd.go:379] Building image: /var/lib/minikube/build/build.4127622262
I0314 00:29:25.656709 1997045 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.4127622262 --local dockerfile=/var/lib/minikube/build/build.4127622262 --output type=image,name=localhost/my-image:functional-362954
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.7s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.2s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.4s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.3s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.1s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:fae5788436194caf673f373aa12047b7881ccada5783b5874ad5320772df147c
#8 exporting manifest sha256:fae5788436194caf673f373aa12047b7881ccada5783b5874ad5320772df147c 0.0s done
#8 exporting config sha256:7264bf588448fca04f076ea297a1bb89e8507f5aa851146e2e05b1b61f774e84 0.0s done
#8 naming to localhost/my-image:functional-362954 done
#8 DONE 0.1s
I0314 00:29:27.645262 1997045 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.4127622262 --local dockerfile=/var/lib/minikube/build/build.4127622262 --output type=image,name=localhost/my-image:functional-362954: (1.988519899s)
I0314 00:29:27.645346 1997045 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.4127622262
I0314 00:29:27.655135 1997045 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.4127622262.tar
I0314 00:29:27.668682 1997045 build_images.go:217] Built localhost/my-image:functional-362954 from /tmp/build.4127622262.tar
I0314 00:29:27.668711 1997045 build_images.go:133] succeeded building to: functional-362954
I0314 00:29:27.668716 1997045 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-362954 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.86s)
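
Note: the trace above shows the containerd build path end to end: the build context is tarred on the host (/tmp/build.4127622262.tar), copied into /var/lib/minikube/build on the node, and built with BuildKit's buildctl using the dockerfile.v0 frontend. A minimal sketch of the same flow, assuming testdata/build contains a Dockerfile:

	out/minikube-linux-arm64 -p functional-362954 image build -t localhost/my-image:functional-362954 testdata/build
	out/minikube-linux-arm64 -p functional-362954 image ls | grep my-image   # confirm the image landed in containerd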

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.93s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.90951603s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-362954
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.93s)
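
Note: Setup stages a host-side image for the later save/remove subtests: pull a known tag, then retag it with the profile name so subsequent steps can move it between the host Docker daemon and the cluster runtime:

	docker pull gcr.io/google-containers/addon-resizer:1.8.8
	docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-362954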

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-362954 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.10s)
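
Note: the tunnel flow these serial steps exercise: a background minikube tunnel process gives LoadBalancer services a reachable ingress IP, which the jsonpath query above reads back. A sketch of the same check by hand (run the tunnel in a separate terminal, or background it as shown):

	out/minikube-linux-arm64 -p functional-362954 tunnel --alsologtostderr &
	kubectl --context functional-362954 get svc nginx-svc -o 'jsonpath={.status.loadBalancer.ingress[0].ip}'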

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.109.242.165 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-362954 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-362954 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.24s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-362954 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.29s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-362954 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.24s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (7.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-362954 /tmp/TestFunctionalparallelMountCmdany-port3911933600/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1710376130875272719" to /tmp/TestFunctionalparallelMountCmdany-port3911933600/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1710376130875272719" to /tmp/TestFunctionalparallelMountCmdany-port3911933600/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1710376130875272719" to /tmp/TestFunctionalparallelMountCmdany-port3911933600/001/test-1710376130875272719
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-362954 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-362954 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (524.857414ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-362954 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-362954 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Mar 14 00:28 created-by-test
-rw-r--r-- 1 docker docker 24 Mar 14 00:28 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Mar 14 00:28 test-1710376130875272719
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-362954 ssh cat /mount-9p/test-1710376130875272719
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-362954 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [94900ea8-b1a1-4c5c-a0a9-04d6635b6b57] Pending
helpers_test.go:344: "busybox-mount" [94900ea8-b1a1-4c5c-a0a9-04d6635b6b57] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [94900ea8-b1a1-4c5c-a0a9-04d6635b6b57] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [94900ea8-b1a1-4c5c-a0a9-04d6635b6b57] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.004281991s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-362954 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-362954 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-362954 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-362954 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-362954 /tmp/TestFunctionalparallelMountCmdany-port3911933600/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.67s)
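
Note: the first findmnt attempt failing with exit 1 is the expected poll-until-mounted pattern; the 9p mount takes a moment to appear and the test simply retries. A minimal repro of the mount round-trip, using a hypothetical host directory /tmp/src:

	out/minikube-linux-arm64 mount -p functional-362954 /tmp/src:/mount-9p --alsologtostderr -v=1 &
	out/minikube-linux-arm64 -p functional-362954 ssh "findmnt -T /mount-9p"   # may need a retry while the mount comes up
	out/minikube-linux-arm64 -p functional-362954 ssh "ls -la /mount-9p"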

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-arm64 -p functional-362954 image rm gcr.io/google-containers/addon-resizer:functional-362954 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-362954 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.64s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-362954
functional_test.go:423: (dbg) Run:  out/minikube-linux-arm64 -p functional-362954 image save --daemon gcr.io/google-containers/addon-resizer:functional-362954 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-362954
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.64s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-362954 /tmp/TestFunctionalparallelMountCmdspecific-port216263518/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-362954 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-362954 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (447.465782ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-362954 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-362954 ssh -- ls -la /mount-9p
E0314 00:29:00.313216 1963897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/addons-122411/client.crt: no such file or directory
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-362954 /tmp/TestFunctionalparallelMountCmdspecific-port216263518/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-362954 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-362954 ssh "sudo umount -f /mount-9p": exit status 1 (353.055973ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-362954 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-362954 /tmp/TestFunctionalparallelMountCmdspecific-port216263518/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.48s)
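
Note: the final umount -f reporting "not mounted" (exit 32) is consistent with the stop step having already torn the mount down, which appears to be what this subtest checks. The only new ingredient over any-port is pinning the 9p server port; with a hypothetical host directory /tmp/src:

	out/minikube-linux-arm64 mount -p functional-362954 /tmp/src:/mount-9p --port 46464 &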

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.97s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-362954 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4000178358/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-362954 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4000178358/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-362954 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4000178358/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-362954 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Done: out/minikube-linux-arm64 -p functional-362954 ssh "findmnt -T" /mount1: (1.157889907s)
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-362954 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-362954 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-362954 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-362954 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4000178358/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-362954 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4000178358/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-362954 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4000178358/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.97s)
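
Note: VerifyCleanup stands up three mounts and then tears all of them down with a single kill switch; the "unable to find parent, assuming dead" lines afterwards show the mount daemons were already gone. The teardown command, as run above:

	out/minikube-linux-arm64 mount -p functional-362954 --kill=true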

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (7.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1433: (dbg) Run:  kubectl --context functional-362954 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-362954 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-759d89bdcc-cr88l" [e742252c-d17c-434d-aeba-cfc4844292a4] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-759d89bdcc-cr88l" [e742252c-d17c-434d-aeba-cfc4844292a4] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.004811084s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.25s)
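
Note: this is the deploy pattern the remaining ServiceCmd subtests build on: create a deployment, expose it as a NodePort service, then resolve its URL through minikube. In sketch form:

	kubectl --context functional-362954 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
	kubectl --context functional-362954 expose deployment hello-node --type=NodePort --port=8080
	out/minikube-linux-arm64 -p functional-362954 service hello-node --url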

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.42s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1311: Took "404.765689ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1325: Took "85.789172ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.49s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-arm64 -p functional-362954 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.62s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1362: Took "424.621424ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1375: Took "72.444196ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.50s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-arm64 -p functional-362954 service list -o json
functional_test.go:1490: Took "660.490221ms" to run "out/minikube-linux-arm64 -p functional-362954 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.66s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-arm64 -p functional-362954 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.49.2:32007
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.57s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-arm64 -p functional-362954 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.50s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-arm64 -p functional-362954 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.49.2:32007
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.50s)

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.08s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-362954
--- PASS: TestFunctional/delete_addon-resizer_images (0.08s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-362954
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-362954
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
TestMutliControlPlane/serial/StartCluster (139.07s)

                                                
                                                
=== RUN   TestMutliControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-162611 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E0314 00:30:22.234430 1963897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/addons-122411/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-162611 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (2m18.212561291s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-162611 status -v=7 --alsologtostderr
--- PASS: TestMutliControlPlane/serial/StartCluster (139.07s)
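
Note: --ha requests a multi-control-plane cluster; the status output later in this report shows ha-162611, ha-162611-m02 and ha-162611-m03 all running as Control Plane. A sketch of the start/status pair the test drives, with the flags used by this run:

    # Start an HA cluster, then verify every node reports Running/Configured
    out/minikube-linux-arm64 start -p ha-162611 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker --container-runtime=containerd
    out/minikube-linux-arm64 -p ha-162611 status -v=7 --alsologtostderr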

TestMutliControlPlane/serial/DeployApp (16.85s)

=== RUN   TestMutliControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-162611 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-162611 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-162611 -- rollout status deployment/busybox: (13.621518072s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-162611 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-162611 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-162611 -- exec busybox-5b5d89c9d6-278gh -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-162611 -- exec busybox-5b5d89c9d6-86s2m -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-162611 -- exec busybox-5b5d89c9d6-n2kfw -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-162611 -- exec busybox-5b5d89c9d6-278gh -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-162611 -- exec busybox-5b5d89c9d6-86s2m -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-162611 -- exec busybox-5b5d89c9d6-n2kfw -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-162611 -- exec busybox-5b5d89c9d6-278gh -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-162611 -- exec busybox-5b5d89c9d6-86s2m -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-162611 -- exec busybox-5b5d89c9d6-n2kfw -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMutliControlPlane/serial/DeployApp (16.85s)
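
Note: the deployment counts as healthy once rollout status returns; each busybox replica must then resolve an external name, the in-cluster service short name, and its FQDN. A sketch, with <pod> standing in for any of the replica names listed above:

    # DNS checks run inside each replica
    out/minikube-linux-arm64 kubectl -p ha-162611 -- rollout status deployment/busybox
    out/minikube-linux-arm64 kubectl -p ha-162611 -- exec <pod> -- nslookup kubernetes.io
    out/minikube-linux-arm64 kubectl -p ha-162611 -- exec <pod> -- nslookup kubernetes.default.svc.cluster.local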

TestMutliControlPlane/serial/PingHostFromPods (1.75s)

=== RUN   TestMutliControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-162611 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-162611 -- exec busybox-5b5d89c9d6-278gh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-162611 -- exec busybox-5b5d89c9d6-278gh -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-162611 -- exec busybox-5b5d89c9d6-86s2m -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-162611 -- exec busybox-5b5d89c9d6-86s2m -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-162611 -- exec busybox-5b5d89c9d6-n2kfw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-162611 -- exec busybox-5b5d89c9d6-n2kfw -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMutliControlPlane/serial/PingHostFromPods (1.75s)
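
Note: the pipeline takes line 5 of busybox nslookup output (the answer line for host.minikube.internal) and cuts out the field holding the host's IP, which the follow-up ping then targets (192.168.49.1 in this run). A sketch, with <pod> standing in for a replica name:

    # Resolve the host IP from inside the pod, then ping it once
    out/minikube-linux-arm64 kubectl -p ha-162611 -- exec <pod> -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
    out/minikube-linux-arm64 kubectl -p ha-162611 -- exec <pod> -- sh -c "ping -c 1 192.168.49.1"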

TestMutliControlPlane/serial/AddWorkerNode (26.47s)

=== RUN   TestMutliControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-162611 -v=7 --alsologtostderr
E0314 00:32:38.370654 1963897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/addons-122411/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-162611 -v=7 --alsologtostderr: (25.39760178s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-162611 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-162611 status -v=7 --alsologtostderr: (1.072007199s)
--- PASS: TestMutliControlPlane/serial/AddWorkerNode (26.47s)

TestMutliControlPlane/serial/NodeLabels (0.12s)

=== RUN   TestMutliControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-162611 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMutliControlPlane/serial/NodeLabels (0.12s)

TestMutliControlPlane/serial/HAppyAfterClusterStart (0.8s)

=== RUN   TestMutliControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMutliControlPlane/serial/HAppyAfterClusterStart (0.80s)

TestMutliControlPlane/serial/CopyFile (20.4s)

=== RUN   TestMutliControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-arm64 -p ha-162611 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Done: out/minikube-linux-arm64 -p ha-162611 status --output json -v=7 --alsologtostderr: (1.021475329s)
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-162611 cp testdata/cp-test.txt ha-162611:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-162611 ssh -n ha-162611 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-162611 cp ha-162611:/home/docker/cp-test.txt /tmp/TestMutliControlPlaneserialCopyFile1615434225/001/cp-test_ha-162611.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-162611 ssh -n ha-162611 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-162611 cp ha-162611:/home/docker/cp-test.txt ha-162611-m02:/home/docker/cp-test_ha-162611_ha-162611-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-162611 ssh -n ha-162611 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-162611 ssh -n ha-162611-m02 "sudo cat /home/docker/cp-test_ha-162611_ha-162611-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-162611 cp ha-162611:/home/docker/cp-test.txt ha-162611-m03:/home/docker/cp-test_ha-162611_ha-162611-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-162611 ssh -n ha-162611 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-162611 ssh -n ha-162611-m03 "sudo cat /home/docker/cp-test_ha-162611_ha-162611-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-162611 cp ha-162611:/home/docker/cp-test.txt ha-162611-m04:/home/docker/cp-test_ha-162611_ha-162611-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-162611 ssh -n ha-162611 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-162611 ssh -n ha-162611-m04 "sudo cat /home/docker/cp-test_ha-162611_ha-162611-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-162611 cp testdata/cp-test.txt ha-162611-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-162611 ssh -n ha-162611-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-162611 cp ha-162611-m02:/home/docker/cp-test.txt /tmp/TestMutliControlPlaneserialCopyFile1615434225/001/cp-test_ha-162611-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-162611 ssh -n ha-162611-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-162611 cp ha-162611-m02:/home/docker/cp-test.txt ha-162611:/home/docker/cp-test_ha-162611-m02_ha-162611.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-162611 ssh -n ha-162611-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-162611 ssh -n ha-162611 "sudo cat /home/docker/cp-test_ha-162611-m02_ha-162611.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-162611 cp ha-162611-m02:/home/docker/cp-test.txt ha-162611-m03:/home/docker/cp-test_ha-162611-m02_ha-162611-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-162611 ssh -n ha-162611-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-162611 ssh -n ha-162611-m03 "sudo cat /home/docker/cp-test_ha-162611-m02_ha-162611-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-162611 cp ha-162611-m02:/home/docker/cp-test.txt ha-162611-m04:/home/docker/cp-test_ha-162611-m02_ha-162611-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-162611 ssh -n ha-162611-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-162611 ssh -n ha-162611-m04 "sudo cat /home/docker/cp-test_ha-162611-m02_ha-162611-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-162611 cp testdata/cp-test.txt ha-162611-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-162611 ssh -n ha-162611-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-162611 cp ha-162611-m03:/home/docker/cp-test.txt /tmp/TestMutliControlPlaneserialCopyFile1615434225/001/cp-test_ha-162611-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-162611 ssh -n ha-162611-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-162611 cp ha-162611-m03:/home/docker/cp-test.txt ha-162611:/home/docker/cp-test_ha-162611-m03_ha-162611.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-162611 ssh -n ha-162611-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-162611 ssh -n ha-162611 "sudo cat /home/docker/cp-test_ha-162611-m03_ha-162611.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-162611 cp ha-162611-m03:/home/docker/cp-test.txt ha-162611-m02:/home/docker/cp-test_ha-162611-m03_ha-162611-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-162611 ssh -n ha-162611-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-162611 ssh -n ha-162611-m02 "sudo cat /home/docker/cp-test_ha-162611-m03_ha-162611-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-162611 cp ha-162611-m03:/home/docker/cp-test.txt ha-162611-m04:/home/docker/cp-test_ha-162611-m03_ha-162611-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-162611 ssh -n ha-162611-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-162611 ssh -n ha-162611-m04 "sudo cat /home/docker/cp-test_ha-162611-m03_ha-162611-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-162611 cp testdata/cp-test.txt ha-162611-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-162611 ssh -n ha-162611-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-162611 cp ha-162611-m04:/home/docker/cp-test.txt /tmp/TestMutliControlPlaneserialCopyFile1615434225/001/cp-test_ha-162611-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-162611 ssh -n ha-162611-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-162611 cp ha-162611-m04:/home/docker/cp-test.txt ha-162611:/home/docker/cp-test_ha-162611-m04_ha-162611.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-162611 ssh -n ha-162611-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-162611 ssh -n ha-162611 "sudo cat /home/docker/cp-test_ha-162611-m04_ha-162611.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-162611 cp ha-162611-m04:/home/docker/cp-test.txt ha-162611-m02:/home/docker/cp-test_ha-162611-m04_ha-162611-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-162611 ssh -n ha-162611-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-162611 ssh -n ha-162611-m02 "sudo cat /home/docker/cp-test_ha-162611-m04_ha-162611-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-162611 cp ha-162611-m04:/home/docker/cp-test.txt ha-162611-m03:/home/docker/cp-test_ha-162611-m04_ha-162611-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-162611 ssh -n ha-162611-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-162611 ssh -n ha-162611-m03 "sudo cat /home/docker/cp-test_ha-162611-m04_ha-162611-m03.txt"
--- PASS: TestMutliControlPlane/serial/CopyFile (20.40s)
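
Note: CopyFile walks the full copy matrix (host to node, node to host, and node to node across all four machines), verifying each hop with ssh + sudo cat. One hop as a sketch:

    # host -> primary node -> m02, then read the file back on m02
    out/minikube-linux-arm64 -p ha-162611 cp testdata/cp-test.txt ha-162611:/home/docker/cp-test.txt
    out/minikube-linux-arm64 -p ha-162611 cp ha-162611:/home/docker/cp-test.txt ha-162611-m02:/home/docker/cp-test_ha-162611_ha-162611-m02.txt
    out/minikube-linux-arm64 -p ha-162611 ssh -n ha-162611-m02 "sudo cat /home/docker/cp-test_ha-162611_ha-162611-m02.txt"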

TestMutliControlPlane/serial/StopSecondaryNode (12.84s)

=== RUN   TestMutliControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-arm64 -p ha-162611 node stop m02 -v=7 --alsologtostderr
E0314 00:33:06.074665 1963897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/addons-122411/client.crt: no such file or directory
ha_test.go:363: (dbg) Done: out/minikube-linux-arm64 -p ha-162611 node stop m02 -v=7 --alsologtostderr: (12.085330796s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-arm64 -p ha-162611 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-162611 status -v=7 --alsologtostderr: exit status 7 (753.334203ms)

-- stdout --
	ha-162611
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-162611-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-162611-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-162611-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0314 00:33:13.150125 2012618 out.go:291] Setting OutFile to fd 1 ...
	I0314 00:33:13.150373 2012618 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 00:33:13.150385 2012618 out.go:304] Setting ErrFile to fd 2...
	I0314 00:33:13.150392 2012618 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 00:33:13.150656 2012618 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18375-1958430/.minikube/bin
	I0314 00:33:13.150851 2012618 out.go:298] Setting JSON to false
	I0314 00:33:13.150885 2012618 mustload.go:65] Loading cluster: ha-162611
	I0314 00:33:13.150965 2012618 notify.go:220] Checking for updates...
	I0314 00:33:13.152257 2012618 config.go:182] Loaded profile config "ha-162611": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0314 00:33:13.152285 2012618 status.go:255] checking status of ha-162611 ...
	I0314 00:33:13.152983 2012618 cli_runner.go:164] Run: docker container inspect ha-162611 --format={{.State.Status}}
	I0314 00:33:13.172477 2012618 status.go:330] ha-162611 host status = "Running" (err=<nil>)
	I0314 00:33:13.172501 2012618 host.go:66] Checking if "ha-162611" exists ...
	I0314 00:33:13.172788 2012618 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-162611
	I0314 00:33:13.191583 2012618 host.go:66] Checking if "ha-162611" exists ...
	I0314 00:33:13.191958 2012618 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0314 00:33:13.192005 2012618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-162611
	I0314 00:33:13.220874 2012618 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35061 SSHKeyPath:/home/jenkins/minikube-integration/18375-1958430/.minikube/machines/ha-162611/id_rsa Username:docker}
	I0314 00:33:13.320760 2012618 ssh_runner.go:195] Run: systemctl --version
	I0314 00:33:13.325315 2012618 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 00:33:13.336492 2012618 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0314 00:33:13.394810 2012618 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:54 OomKillDisable:true NGoroutines:76 SystemTime:2024-03-14 00:33:13.384849786 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0314 00:33:13.395482 2012618 kubeconfig.go:125] found "ha-162611" server: "https://192.168.49.254:8443"
	I0314 00:33:13.395502 2012618 api_server.go:166] Checking apiserver status ...
	I0314 00:33:13.395554 2012618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:33:13.413235 2012618 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1407/cgroup
	I0314 00:33:13.423312 2012618 api_server.go:182] apiserver freezer: "5:freezer:/docker/23a5dbd5b853ab7db171ad22f4cbfda60f0fb82049b48351488cef2e4624b60e/kubepods/burstable/pod1a331f50b0d52739ae2a426f88192ecf/ef920a95eff8a19a4296500d185a548719abe305af21b67f8054250f73ccffd6"
	I0314 00:33:13.423384 2012618 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/23a5dbd5b853ab7db171ad22f4cbfda60f0fb82049b48351488cef2e4624b60e/kubepods/burstable/pod1a331f50b0d52739ae2a426f88192ecf/ef920a95eff8a19a4296500d185a548719abe305af21b67f8054250f73ccffd6/freezer.state
	I0314 00:33:13.432862 2012618 api_server.go:204] freezer state: "THAWED"
	I0314 00:33:13.432887 2012618 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0314 00:33:13.442838 2012618 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0314 00:33:13.442868 2012618 status.go:422] ha-162611 apiserver status = Running (err=<nil>)
	I0314 00:33:13.442880 2012618 status.go:257] ha-162611 status: &{Name:ha-162611 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0314 00:33:13.442926 2012618 status.go:255] checking status of ha-162611-m02 ...
	I0314 00:33:13.443280 2012618 cli_runner.go:164] Run: docker container inspect ha-162611-m02 --format={{.State.Status}}
	I0314 00:33:13.461424 2012618 status.go:330] ha-162611-m02 host status = "Stopped" (err=<nil>)
	I0314 00:33:13.461444 2012618 status.go:343] host is not running, skipping remaining checks
	I0314 00:33:13.461452 2012618 status.go:257] ha-162611-m02 status: &{Name:ha-162611-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0314 00:33:13.461484 2012618 status.go:255] checking status of ha-162611-m03 ...
	I0314 00:33:13.461787 2012618 cli_runner.go:164] Run: docker container inspect ha-162611-m03 --format={{.State.Status}}
	I0314 00:33:13.482065 2012618 status.go:330] ha-162611-m03 host status = "Running" (err=<nil>)
	I0314 00:33:13.482089 2012618 host.go:66] Checking if "ha-162611-m03" exists ...
	I0314 00:33:13.482396 2012618 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-162611-m03
	I0314 00:33:13.498849 2012618 host.go:66] Checking if "ha-162611-m03" exists ...
	I0314 00:33:13.499162 2012618 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0314 00:33:13.499246 2012618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-162611-m03
	I0314 00:33:13.515712 2012618 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35071 SSHKeyPath:/home/jenkins/minikube-integration/18375-1958430/.minikube/machines/ha-162611-m03/id_rsa Username:docker}
	I0314 00:33:13.612593 2012618 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 00:33:13.625013 2012618 kubeconfig.go:125] found "ha-162611" server: "https://192.168.49.254:8443"
	I0314 00:33:13.625049 2012618 api_server.go:166] Checking apiserver status ...
	I0314 00:33:13.625123 2012618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:33:13.637346 2012618 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1331/cgroup
	I0314 00:33:13.647075 2012618 api_server.go:182] apiserver freezer: "5:freezer:/docker/c483cb1914ed25f519b8c2fcefdbfcf3bc5c6d4d98fb34c6b5003242413ed8ec/kubepods/burstable/pod8f22912782eac27e406c010fde2473bd/21b23d4d5d58e5d398090a6a0287c05defa829d5b0186458fa67e4cb89fe6413"
	I0314 00:33:13.647149 2012618 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/c483cb1914ed25f519b8c2fcefdbfcf3bc5c6d4d98fb34c6b5003242413ed8ec/kubepods/burstable/pod8f22912782eac27e406c010fde2473bd/21b23d4d5d58e5d398090a6a0287c05defa829d5b0186458fa67e4cb89fe6413/freezer.state
	I0314 00:33:13.656490 2012618 api_server.go:204] freezer state: "THAWED"
	I0314 00:33:13.656520 2012618 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0314 00:33:13.665179 2012618 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0314 00:33:13.665209 2012618 status.go:422] ha-162611-m03 apiserver status = Running (err=<nil>)
	I0314 00:33:13.665221 2012618 status.go:257] ha-162611-m03 status: &{Name:ha-162611-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0314 00:33:13.665242 2012618 status.go:255] checking status of ha-162611-m04 ...
	I0314 00:33:13.665540 2012618 cli_runner.go:164] Run: docker container inspect ha-162611-m04 --format={{.State.Status}}
	I0314 00:33:13.682890 2012618 status.go:330] ha-162611-m04 host status = "Running" (err=<nil>)
	I0314 00:33:13.682916 2012618 host.go:66] Checking if "ha-162611-m04" exists ...
	I0314 00:33:13.683277 2012618 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-162611-m04
	I0314 00:33:13.699021 2012618 host.go:66] Checking if "ha-162611-m04" exists ...
	I0314 00:33:13.699415 2012618 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0314 00:33:13.699476 2012618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-162611-m04
	I0314 00:33:13.715908 2012618 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35076 SSHKeyPath:/home/jenkins/minikube-integration/18375-1958430/.minikube/machines/ha-162611-m04/id_rsa Username:docker}
	I0314 00:33:13.816486 2012618 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 00:33:13.828236 2012618 status.go:257] ha-162611-m04 status: &{Name:ha-162611-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMutliControlPlane/serial/StopSecondaryNode (12.84s)
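
Note: the "Non-zero exit ... exit status 7" above is the assertion target, not a failure: with m02 stopped, minikube status reports the degraded cluster through its exit code while still printing per-node state. Sketch of the pair:

    # Stop one control-plane node; status exits 7 until m02 comes back
    out/minikube-linux-arm64 -p ha-162611 node stop m02 -v=7 --alsologtostderr
    out/minikube-linux-arm64 -p ha-162611 status -v=7 --alsologtostderr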

TestMutliControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.56s)

=== RUN   TestMutliControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMutliControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.56s)

TestMutliControlPlane/serial/RestartSecondaryNode (18.61s)

=== RUN   TestMutliControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-arm64 -p ha-162611 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Done: out/minikube-linux-arm64 -p ha-162611 node start m02 -v=7 --alsologtostderr: (17.479460852s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p ha-162611 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-linux-arm64 -p ha-162611 status -v=7 --alsologtostderr: (1.014514049s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMutliControlPlane/serial/RestartSecondaryNode (18.61s)
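
Note: restarting the stopped node returns status to a clean exit, and the test cross-checks the node list with kubectl. Sketch:

    # Bring m02 back and confirm it rejoins
    out/minikube-linux-arm64 -p ha-162611 node start m02 -v=7 --alsologtostderr
    out/minikube-linux-arm64 -p ha-162611 status -v=7 --alsologtostderr
    kubectl get nodes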

TestMutliControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.1s)

=== RUN   TestMutliControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.100848066s)
--- PASS: TestMutliControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.10s)

TestMutliControlPlane/serial/RestartClusterKeepsNodes (129.37s)

=== RUN   TestMutliControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-162611 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-162611 -v=7 --alsologtostderr
E0314 00:33:35.417906 1963897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/functional-362954/client.crt: no such file or directory
E0314 00:33:35.423293 1963897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/functional-362954/client.crt: no such file or directory
E0314 00:33:35.433763 1963897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/functional-362954/client.crt: no such file or directory
E0314 00:33:35.454832 1963897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/functional-362954/client.crt: no such file or directory
E0314 00:33:35.497335 1963897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/functional-362954/client.crt: no such file or directory
E0314 00:33:35.577660 1963897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/functional-362954/client.crt: no such file or directory
E0314 00:33:35.738038 1963897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/functional-362954/client.crt: no such file or directory
E0314 00:33:36.058444 1963897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/functional-362954/client.crt: no such file or directory
E0314 00:33:36.699480 1963897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/functional-362954/client.crt: no such file or directory
E0314 00:33:37.979684 1963897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/functional-362954/client.crt: no such file or directory
E0314 00:33:40.539919 1963897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/functional-362954/client.crt: no such file or directory
E0314 00:33:45.660190 1963897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/functional-362954/client.crt: no such file or directory
E0314 00:33:55.900461 1963897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/functional-362954/client.crt: no such file or directory
ha_test.go:462: (dbg) Done: out/minikube-linux-arm64 stop -p ha-162611 -v=7 --alsologtostderr: (26.436591999s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-arm64 start -p ha-162611 --wait=true -v=7 --alsologtostderr
E0314 00:34:16.381315 1963897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/functional-362954/client.crt: no such file or directory
E0314 00:34:57.342126 1963897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/functional-362954/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-arm64 start -p ha-162611 --wait=true -v=7 --alsologtostderr: (1m42.753934599s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-162611
--- PASS: TestMutliControlPlane/serial/RestartClusterKeepsNodes (129.37s)
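
Note: the point here is that a full stop/start cycle preserves the node list; the E0314 cert_rotation lines appear to be log noise from the client cert of the functional-362954 profile torn down earlier in the run, not part of the assertion. Sketch of the cycle:

    # Record the node list, bounce the whole cluster, compare
    out/minikube-linux-arm64 node list -p ha-162611 -v=7 --alsologtostderr
    out/minikube-linux-arm64 stop -p ha-162611 -v=7 --alsologtostderr
    out/minikube-linux-arm64 start -p ha-162611 --wait=true -v=7 --alsologtostderr
    out/minikube-linux-arm64 node list -p ha-162611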

TestMutliControlPlane/serial/DeleteSecondaryNode (11.28s)

=== RUN   TestMutliControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-arm64 -p ha-162611 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-arm64 -p ha-162611 node delete m03 -v=7 --alsologtostderr: (10.346062161s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-arm64 -p ha-162611 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMutliControlPlane/serial/DeleteSecondaryNode (11.28s)
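
Note: after removing m03 the test checks that minikube and the API server agree on the surviving nodes; the go-template prints one Ready-condition status per node. Sketch:

    # Delete the third control plane and check every remaining node is Ready
    out/minikube-linux-arm64 -p ha-162611 node delete m03 -v=7 --alsologtostderr
    kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"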

TestMutliControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.56s)

=== RUN   TestMutliControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMutliControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.56s)

TestMutliControlPlane/serial/StopCluster (36.03s)

=== RUN   TestMutliControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-arm64 -p ha-162611 stop -v=7 --alsologtostderr
E0314 00:36:19.262821 1963897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/functional-362954/client.crt: no such file or directory
ha_test.go:531: (dbg) Done: out/minikube-linux-arm64 -p ha-162611 stop -v=7 --alsologtostderr: (35.92125262s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-arm64 -p ha-162611 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-162611 status -v=7 --alsologtostderr: exit status 7 (111.851974ms)

-- stdout --
	ha-162611
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-162611-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-162611-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0314 00:36:31.290131 2025749 out.go:291] Setting OutFile to fd 1 ...
	I0314 00:36:31.290252 2025749 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 00:36:31.290261 2025749 out.go:304] Setting ErrFile to fd 2...
	I0314 00:36:31.290267 2025749 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 00:36:31.290510 2025749 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18375-1958430/.minikube/bin
	I0314 00:36:31.290699 2025749 out.go:298] Setting JSON to false
	I0314 00:36:31.290738 2025749 mustload.go:65] Loading cluster: ha-162611
	I0314 00:36:31.290861 2025749 notify.go:220] Checking for updates...
	I0314 00:36:31.291127 2025749 config.go:182] Loaded profile config "ha-162611": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0314 00:36:31.291138 2025749 status.go:255] checking status of ha-162611 ...
	I0314 00:36:31.291665 2025749 cli_runner.go:164] Run: docker container inspect ha-162611 --format={{.State.Status}}
	I0314 00:36:31.308649 2025749 status.go:330] ha-162611 host status = "Stopped" (err=<nil>)
	I0314 00:36:31.308673 2025749 status.go:343] host is not running, skipping remaining checks
	I0314 00:36:31.308680 2025749 status.go:257] ha-162611 status: &{Name:ha-162611 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0314 00:36:31.308708 2025749 status.go:255] checking status of ha-162611-m02 ...
	I0314 00:36:31.309011 2025749 cli_runner.go:164] Run: docker container inspect ha-162611-m02 --format={{.State.Status}}
	I0314 00:36:31.324460 2025749 status.go:330] ha-162611-m02 host status = "Stopped" (err=<nil>)
	I0314 00:36:31.324482 2025749 status.go:343] host is not running, skipping remaining checks
	I0314 00:36:31.324489 2025749 status.go:257] ha-162611-m02 status: &{Name:ha-162611-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0314 00:36:31.324509 2025749 status.go:255] checking status of ha-162611-m04 ...
	I0314 00:36:31.324805 2025749 cli_runner.go:164] Run: docker container inspect ha-162611-m04 --format={{.State.Status}}
	I0314 00:36:31.340078 2025749 status.go:330] ha-162611-m04 host status = "Stopped" (err=<nil>)
	I0314 00:36:31.340099 2025749 status.go:343] host is not running, skipping remaining checks
	I0314 00:36:31.340105 2025749 status.go:257] ha-162611-m04 status: &{Name:ha-162611-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMutliControlPlane/serial/StopCluster (36.03s)
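
Note: as with StopSecondaryNode above, exit status 7 is the expected signal: every node now reports Stopped, and the test asserts that through the status exit code. Sketch:

    # Stop the whole cluster; status exits non-zero while it is down
    out/minikube-linux-arm64 -p ha-162611 stop -v=7 --alsologtostderr
    out/minikube-linux-arm64 -p ha-162611 status -v=7 --alsologtostderr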

TestMutliControlPlane/serial/RestartCluster (67.79s)

=== RUN   TestMutliControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-arm64 start -p ha-162611 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
ha_test.go:560: (dbg) Done: out/minikube-linux-arm64 start -p ha-162611 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m6.716468424s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-arm64 -p ha-162611 status -v=7 --alsologtostderr
E0314 00:37:38.371655 1963897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/addons-122411/client.crt: no such file or directory
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMutliControlPlane/serial/RestartCluster (67.79s)

TestMutliControlPlane/serial/DegradedAfterClusterRestart (0.54s)

=== RUN   TestMutliControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMutliControlPlane/serial/DegradedAfterClusterRestart (0.54s)

TestMutliControlPlane/serial/AddSecondaryNode (41.48s)

=== RUN   TestMutliControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-162611 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-arm64 node add -p ha-162611 --control-plane -v=7 --alsologtostderr: (40.390191145s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-arm64 -p ha-162611 status -v=7 --alsologtostderr
ha_test.go:611: (dbg) Done: out/minikube-linux-arm64 -p ha-162611 status -v=7 --alsologtostderr: (1.085072953s)
--- PASS: TestMutliControlPlane/serial/AddSecondaryNode (41.48s)
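
Note: node add --control-plane grows the control plane back to three members after the earlier delete. Sketch:

    # Add a control-plane node and re-check cluster health
    out/minikube-linux-arm64 node add -p ha-162611 --control-plane -v=7 --alsologtostderr
    out/minikube-linux-arm64 -p ha-162611 status -v=7 --alsologtostderr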

TestMutliControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.79s)

=== RUN   TestMutliControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMutliControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.79s)

TestJSONOutput/start/Command (64.1s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-176838 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd
E0314 00:38:35.418084 1963897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/functional-362954/client.crt: no such file or directory
E0314 00:39:03.103673 1963897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/functional-362954/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-176838 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd: (1m4.09197316s)
--- PASS: TestJSONOutput/start/Command (64.10s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.74s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-176838 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.74s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.68s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-176838 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.68s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.78s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-176838 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-176838 --output=json --user=testUser: (5.778588198s)
--- PASS: TestJSONOutput/stop/Command (5.78s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.23s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-078410 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-078410 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (86.302294ms)

-- stdout --
	{"specversion":"1.0","id":"9419a2ab-f820-45a5-a006-b430d0dd43ff","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-078410] minikube v1.32.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"595dfa44-c27f-4b39-bcbe-20f842165cde","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18375"}}
	{"specversion":"1.0","id":"7b4d7a80-b8cd-4d3b-9462-b3d1511b93a9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"c0570421-0d62-45c4-abda-3aad9ca4ceed","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18375-1958430/kubeconfig"}}
	{"specversion":"1.0","id":"b03a7410-182a-4fbe-ac4b-4a838d80a649","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18375-1958430/.minikube"}}
	{"specversion":"1.0","id":"021b804e-c554-4343-b88d-b59d34edfdbe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"7dde886f-727d-42e6-bc50-c37fbb54bd36","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"660bbd38-940d-40bc-a35f-b1bce1070869","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-078410" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-078410
--- PASS: TestErrorJSONOutput (0.23s)
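
Note: with --output=json minikube emits one CloudEvents-style JSON object per line, and error events carry a stable name and exit code (DRV_UNSUPPORTED_OS / 56 above). Assuming jq is available on the host (it is not part of this run), the messages could be extracted like so:

    # Hypothetical post-processing of the JSON event stream shown above
    out/minikube-linux-arm64 start -p json-output-error-078410 --memory=2200 --output=json --wait=true --driver=fail | jq -r '.data.message'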

TestKicCustomNetwork/create_custom_network (40.09s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-283195 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-283195 --network=: (37.961468203s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-283195" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-283195
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-283195: (2.107508084s)
--- PASS: TestKicCustomNetwork/create_custom_network (40.09s)

TestKicCustomNetwork/use_default_bridge_network (34.06s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-043255 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-043255 --network=bridge: (32.090228375s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-043255" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-043255
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-043255: (1.951940545s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (34.06s)

TestKicExistingNetwork (36.44s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-687278 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-687278 --network=existing-network: (34.281230779s)
helpers_test.go:175: Cleaning up "existing-network-687278" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-687278
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-687278: (2.019390966s)
--- PASS: TestKicExistingNetwork (36.44s)

TestKicCustomSubnet (33.41s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-097107 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-097107 --subnet=192.168.60.0/24: (31.246075356s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-097107 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-097107" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-097107
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-097107: (2.136764296s)
--- PASS: TestKicCustomSubnet (33.41s)
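
A sketch of the subnet check, with the commands taken from the log above but a hypothetical profile name:

	# request a specific subnet for the cluster's Docker network
	minikube start -p demo --subnet=192.168.60.0/24
	# the network is named after the profile; verify the allocated subnet
	docker network inspect demo --format "{{(index .IPAM.Config 0).Subnet}}"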

TestKicStaticIP (34.23s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-064074 --static-ip=192.168.200.200
E0314 00:42:38.371444 1963897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/addons-122411/client.crt: no such file or directory
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-064074 --static-ip=192.168.200.200: (31.939030024s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-064074 ip
helpers_test.go:175: Cleaning up "static-ip-064074" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-064074
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-064074: (2.134735435s)
--- PASS: TestKicStaticIP (34.23s)
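
Static IP assignment follows the same pattern; a sketch with a hypothetical profile name:

	# pin the node to a fixed address inside its Docker network
	minikube start -p demo --static-ip=192.168.200.200
	# should print the requested address
	minikube -p demo ip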

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (68.01s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-576861 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-576861 --driver=docker  --container-runtime=containerd: (30.676396679s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-580102 --driver=docker  --container-runtime=containerd
E0314 00:43:35.417910 1963897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/functional-362954/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-580102 --driver=docker  --container-runtime=containerd: (31.818964809s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-576861
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-580102
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-580102" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-580102
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-580102: (1.937969802s)
helpers_test.go:175: Cleaning up "first-576861" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-576861
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-576861: (2.280690696s)
--- PASS: TestMinikubeProfile (68.01s)
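
The profile juggling shown in this test can be reproduced directly; a sketch with hypothetical profile names "first" and "second":

	minikube start -p first
	minikube start -p second
	# make "first" the active profile, then list both as JSON
	minikube profile first
	minikube profile list --output json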

TestMountStart/serial/StartWithMountFirst (7.08s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-399541 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-399541 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (6.07796038s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.08s)

TestMountStart/serial/VerifyMountFirst (0.27s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-399541 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)

TestMountStart/serial/StartWithMountSecond (9.69s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-413268 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
E0314 00:44:01.435363 1963897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/addons-122411/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-413268 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (8.68692454s)
--- PASS: TestMountStart/serial/StartWithMountSecond (9.69s)

TestMountStart/serial/VerifyMountSecond (0.27s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-413268 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

TestMountStart/serial/DeleteFirst (1.63s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-399541 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-399541 --alsologtostderr -v=5: (1.633581002s)
--- PASS: TestMountStart/serial/DeleteFirst (1.63s)

TestMountStart/serial/VerifyMountPostDelete (0.28s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-413268 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.28s)

TestMountStart/serial/Stop (1.2s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-413268
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-413268: (1.202227235s)
--- PASS: TestMountStart/serial/Stop (1.20s)

TestMountStart/serial/RestartStopped (7.68s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-413268
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-413268: (6.679306865s)
--- PASS: TestMountStart/serial/RestartStopped (7.68s)

TestMountStart/serial/VerifyMountPostStop (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-413268 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)
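
The whole TestMountStart sequence boils down to the following flow; a sketch with a hypothetical profile name, using the exact mount flags from the log:

	# host-folder mount with explicit uid/gid, msize and port; no Kubernetes needed
	minikube start -p mnt --memory=2048 --mount --mount-gid 0 --mount-msize 6543 \
		--mount-port 46464 --mount-uid 0 --no-kubernetes
	# the host folder is visible inside the node
	minikube -p mnt ssh -- ls /minikube-host
	# the mount comes back after a stop/start cycle
	minikube stop -p mnt
	minikube start -p mnt
	minikube -p mnt ssh -- ls /minikube-host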

TestMultiNode/serial/FreshStart2Nodes (70.31s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-722594 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-722594 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m9.730115722s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-722594 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (70.31s)
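
A sketch of the two-node bring-up, with a hypothetical profile name "mn" standing in for multinode-722594:

	# two nodes, waiting until all components are ready
	minikube start -p mn --nodes=2 --memory=2200 --wait=true \
		--driver=docker --container-runtime=containerd
	# one control plane plus one worker should report Running
	minikube -p mn status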

TestMultiNode/serial/DeployApp2Nodes (5.36s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-722594 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-722594 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-722594 -- rollout status deployment/busybox: (3.251885467s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-722594 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-722594 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-722594 -- exec busybox-5b5d89c9d6-44phb -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-722594 -- exec busybox-5b5d89c9d6-gnrj7 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-722594 -- exec busybox-5b5d89c9d6-44phb -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-722594 -- exec busybox-5b5d89c9d6-gnrj7 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-722594 -- exec busybox-5b5d89c9d6-44phb -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-722594 -- exec busybox-5b5d89c9d6-gnrj7 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.36s)

TestMultiNode/serial/PingHostFrom2Pods (1.09s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-722594 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-722594 -- exec busybox-5b5d89c9d6-44phb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-722594 -- exec busybox-5b5d89c9d6-44phb -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-722594 -- exec busybox-5b5d89c9d6-gnrj7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-722594 -- exec busybox-5b5d89c9d6-gnrj7 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.09s)

TestMultiNode/serial/AddNode (18.6s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-722594 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-722594 -v 3 --alsologtostderr: (17.919256073s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-722594 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (18.60s)

TestMultiNode/serial/MultiNodeLabels (0.11s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-722594 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.11s)

TestMultiNode/serial/ProfileList (0.35s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.35s)

TestMultiNode/serial/CopyFile (10.75s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-722594 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-722594 cp testdata/cp-test.txt multinode-722594:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-722594 ssh -n multinode-722594 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-722594 cp multinode-722594:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3431165542/001/cp-test_multinode-722594.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-722594 ssh -n multinode-722594 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-722594 cp multinode-722594:/home/docker/cp-test.txt multinode-722594-m02:/home/docker/cp-test_multinode-722594_multinode-722594-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-722594 ssh -n multinode-722594 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-722594 ssh -n multinode-722594-m02 "sudo cat /home/docker/cp-test_multinode-722594_multinode-722594-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-722594 cp multinode-722594:/home/docker/cp-test.txt multinode-722594-m03:/home/docker/cp-test_multinode-722594_multinode-722594-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-722594 ssh -n multinode-722594 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-722594 ssh -n multinode-722594-m03 "sudo cat /home/docker/cp-test_multinode-722594_multinode-722594-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-722594 cp testdata/cp-test.txt multinode-722594-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-722594 ssh -n multinode-722594-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-722594 cp multinode-722594-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3431165542/001/cp-test_multinode-722594-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-722594 ssh -n multinode-722594-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-722594 cp multinode-722594-m02:/home/docker/cp-test.txt multinode-722594:/home/docker/cp-test_multinode-722594-m02_multinode-722594.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-722594 ssh -n multinode-722594-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-722594 ssh -n multinode-722594 "sudo cat /home/docker/cp-test_multinode-722594-m02_multinode-722594.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-722594 cp multinode-722594-m02:/home/docker/cp-test.txt multinode-722594-m03:/home/docker/cp-test_multinode-722594-m02_multinode-722594-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-722594 ssh -n multinode-722594-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-722594 ssh -n multinode-722594-m03 "sudo cat /home/docker/cp-test_multinode-722594-m02_multinode-722594-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-722594 cp testdata/cp-test.txt multinode-722594-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-722594 ssh -n multinode-722594-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-722594 cp multinode-722594-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3431165542/001/cp-test_multinode-722594-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-722594 ssh -n multinode-722594-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-722594 cp multinode-722594-m03:/home/docker/cp-test.txt multinode-722594:/home/docker/cp-test_multinode-722594-m03_multinode-722594.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-722594 ssh -n multinode-722594-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-722594 ssh -n multinode-722594 "sudo cat /home/docker/cp-test_multinode-722594-m03_multinode-722594.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-722594 cp multinode-722594-m03:/home/docker/cp-test.txt multinode-722594-m02:/home/docker/cp-test_multinode-722594-m03_multinode-722594-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-722594 ssh -n multinode-722594-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-722594 ssh -n multinode-722594-m02 "sudo cat /home/docker/cp-test_multinode-722594-m03_multinode-722594-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.75s)
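
The copy matrix above reduces to a few shapes of "minikube cp"; a sketch with the hypothetical profile "mn" (its nodes are then named mn and mn-m02):

	# host -> node
	minikube -p mn cp testdata/cp-test.txt mn:/home/docker/cp-test.txt
	# node -> node uses the same node:path syntax on both sides
	minikube -p mn cp mn:/home/docker/cp-test.txt mn-m02:/home/docker/cp-test.txt
	# verify on a specific node over ssh
	minikube -p mn ssh -n mn-m02 "sudo cat /home/docker/cp-test.txt"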

TestMultiNode/serial/StopNode (2.33s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-722594 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-722594 node stop m03: (1.216131351s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-722594 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-722594 status: exit status 7 (541.61608ms)

-- stdout --
	multinode-722594
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-722594-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-722594-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-722594 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-722594 status --alsologtostderr: exit status 7 (573.142453ms)

-- stdout --
	multinode-722594
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-722594-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-722594-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0314 00:46:09.850338 2077280 out.go:291] Setting OutFile to fd 1 ...
	I0314 00:46:09.850456 2077280 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 00:46:09.850481 2077280 out.go:304] Setting ErrFile to fd 2...
	I0314 00:46:09.850489 2077280 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 00:46:09.850759 2077280 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18375-1958430/.minikube/bin
	I0314 00:46:09.850950 2077280 out.go:298] Setting JSON to false
	I0314 00:46:09.850978 2077280 mustload.go:65] Loading cluster: multinode-722594
	I0314 00:46:09.851039 2077280 notify.go:220] Checking for updates...
	I0314 00:46:09.851402 2077280 config.go:182] Loaded profile config "multinode-722594": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0314 00:46:09.851417 2077280 status.go:255] checking status of multinode-722594 ...
	I0314 00:46:09.852254 2077280 cli_runner.go:164] Run: docker container inspect multinode-722594 --format={{.State.Status}}
	I0314 00:46:09.870902 2077280 status.go:330] multinode-722594 host status = "Running" (err=<nil>)
	I0314 00:46:09.870929 2077280 host.go:66] Checking if "multinode-722594" exists ...
	I0314 00:46:09.871283 2077280 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-722594
	I0314 00:46:09.887012 2077280 host.go:66] Checking if "multinode-722594" exists ...
	I0314 00:46:09.887378 2077280 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0314 00:46:09.887449 2077280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-722594
	I0314 00:46:09.912698 2077280 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35181 SSHKeyPath:/home/jenkins/minikube-integration/18375-1958430/.minikube/machines/multinode-722594/id_rsa Username:docker}
	I0314 00:46:10.036876 2077280 ssh_runner.go:195] Run: systemctl --version
	I0314 00:46:10.044166 2077280 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 00:46:10.058862 2077280 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0314 00:46:10.127652 2077280 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:66 SystemTime:2024-03-14 00:46:10.114393222 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0314 00:46:10.128309 2077280 kubeconfig.go:125] found "multinode-722594" server: "https://192.168.67.2:8443"
	I0314 00:46:10.128344 2077280 api_server.go:166] Checking apiserver status ...
	I0314 00:46:10.128394 2077280 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 00:46:10.142879 2077280 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1396/cgroup
	I0314 00:46:10.153673 2077280 api_server.go:182] apiserver freezer: "5:freezer:/docker/554018c47ed37db976cca72ba031d1bd6b9af23fab0a49a0b2914617eb7b7eb1/kubepods/burstable/pod029c27ba11469d513d4a01de0dfaa5ee/9dab345593daba42ae5146c756d2960c652fa17d8040cc93e63e4b9d223670e0"
	I0314 00:46:10.153764 2077280 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/554018c47ed37db976cca72ba031d1bd6b9af23fab0a49a0b2914617eb7b7eb1/kubepods/burstable/pod029c27ba11469d513d4a01de0dfaa5ee/9dab345593daba42ae5146c756d2960c652fa17d8040cc93e63e4b9d223670e0/freezer.state
	I0314 00:46:10.163810 2077280 api_server.go:204] freezer state: "THAWED"
	I0314 00:46:10.163840 2077280 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0314 00:46:10.172853 2077280 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0314 00:46:10.172882 2077280 status.go:422] multinode-722594 apiserver status = Running (err=<nil>)
	I0314 00:46:10.172894 2077280 status.go:257] multinode-722594 status: &{Name:multinode-722594 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0314 00:46:10.172917 2077280 status.go:255] checking status of multinode-722594-m02 ...
	I0314 00:46:10.173252 2077280 cli_runner.go:164] Run: docker container inspect multinode-722594-m02 --format={{.State.Status}}
	I0314 00:46:10.192386 2077280 status.go:330] multinode-722594-m02 host status = "Running" (err=<nil>)
	I0314 00:46:10.192413 2077280 host.go:66] Checking if "multinode-722594-m02" exists ...
	I0314 00:46:10.192738 2077280 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-722594-m02
	I0314 00:46:10.208949 2077280 host.go:66] Checking if "multinode-722594-m02" exists ...
	I0314 00:46:10.209276 2077280 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0314 00:46:10.209348 2077280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-722594-m02
	I0314 00:46:10.226095 2077280 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35186 SSHKeyPath:/home/jenkins/minikube-integration/18375-1958430/.minikube/machines/multinode-722594-m02/id_rsa Username:docker}
	I0314 00:46:10.322706 2077280 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 00:46:10.334538 2077280 status.go:257] multinode-722594-m02 status: &{Name:multinode-722594-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0314 00:46:10.334600 2077280 status.go:255] checking status of multinode-722594-m03 ...
	I0314 00:46:10.334936 2077280 cli_runner.go:164] Run: docker container inspect multinode-722594-m03 --format={{.State.Status}}
	I0314 00:46:10.351370 2077280 status.go:330] multinode-722594-m03 host status = "Stopped" (err=<nil>)
	I0314 00:46:10.351408 2077280 status.go:343] host is not running, skipping remaining checks
	I0314 00:46:10.351417 2077280 status.go:257] multinode-722594-m03 status: &{Name:multinode-722594-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.33s)
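
Stopping a single node leaves the cluster degraded but reachable, which is why status exits 7 above rather than failing outright. A sketch with the hypothetical profile "mn":

	minikube -p mn node stop m03
	# exit status 7 while any node is stopped; output lists m03 as Stopped
	minikube -p mn status
	# bring the node back
	minikube -p mn node start m03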

TestMultiNode/serial/StartAfterStop (9.63s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-722594 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-722594 node start m03 -v=7 --alsologtostderr: (8.853131424s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-722594 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.63s)

TestMultiNode/serial/RestartKeepsNodes (80.34s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-722594
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-722594
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-722594: (25.024240078s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-722594 --wait=true -v=8 --alsologtostderr
E0314 00:47:38.370861 1963897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/addons-122411/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-722594 --wait=true -v=8 --alsologtostderr: (55.146993869s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-722594
--- PASS: TestMultiNode/serial/RestartKeepsNodes (80.34s)

TestMultiNode/serial/DeleteNode (5.57s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-722594 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-722594 node delete m03: (4.834355949s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-722594 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.57s)
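
Node deletion is the inverse of "node add"; a sketch with the hypothetical profile "mn":

	# remove the worker; its container goes away and kubectl stops listing it
	minikube -p mn node delete m03
	kubectl get nodes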

TestMultiNode/serial/StopMultiNode (24.04s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-722594 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-722594 stop: (23.848279572s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-722594 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-722594 status: exit status 7 (94.02418ms)

-- stdout --
	multinode-722594
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-722594-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-722594 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-722594 status --alsologtostderr: exit status 7 (92.798382ms)

-- stdout --
	multinode-722594
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-722594-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0314 00:48:09.894984 2084875 out.go:291] Setting OutFile to fd 1 ...
	I0314 00:48:09.895172 2084875 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 00:48:09.895187 2084875 out.go:304] Setting ErrFile to fd 2...
	I0314 00:48:09.895193 2084875 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 00:48:09.895472 2084875 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18375-1958430/.minikube/bin
	I0314 00:48:09.895663 2084875 out.go:298] Setting JSON to false
	I0314 00:48:09.895690 2084875 mustload.go:65] Loading cluster: multinode-722594
	I0314 00:48:09.895721 2084875 notify.go:220] Checking for updates...
	I0314 00:48:09.896104 2084875 config.go:182] Loaded profile config "multinode-722594": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0314 00:48:09.896116 2084875 status.go:255] checking status of multinode-722594 ...
	I0314 00:48:09.896591 2084875 cli_runner.go:164] Run: docker container inspect multinode-722594 --format={{.State.Status}}
	I0314 00:48:09.914127 2084875 status.go:330] multinode-722594 host status = "Stopped" (err=<nil>)
	I0314 00:48:09.914165 2084875 status.go:343] host is not running, skipping remaining checks
	I0314 00:48:09.914173 2084875 status.go:257] multinode-722594 status: &{Name:multinode-722594 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0314 00:48:09.914208 2084875 status.go:255] checking status of multinode-722594-m02 ...
	I0314 00:48:09.914505 2084875 cli_runner.go:164] Run: docker container inspect multinode-722594-m02 --format={{.State.Status}}
	I0314 00:48:09.930581 2084875 status.go:330] multinode-722594-m02 host status = "Stopped" (err=<nil>)
	I0314 00:48:09.930599 2084875 status.go:343] host is not running, skipping remaining checks
	I0314 00:48:09.930607 2084875 status.go:257] multinode-722594-m02 status: &{Name:multinode-722594-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.04s)

TestMultiNode/serial/RestartMultiNode (45.71s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-722594 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0314 00:48:35.417570 1963897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/functional-362954/client.crt: no such file or directory
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-722594 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (45.005001535s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-722594 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (45.71s)

TestMultiNode/serial/ValidateNameConflict (33.66s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-722594
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-722594-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-722594-m02 --driver=docker  --container-runtime=containerd: exit status 14 (99.086919ms)

-- stdout --
	* [multinode-722594-m02] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18375
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18375-1958430/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18375-1958430/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-722594-m02' is duplicated with machine name 'multinode-722594-m02' in profile 'multinode-722594'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-722594-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-722594-m03 --driver=docker  --container-runtime=containerd: (31.180923627s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-722594
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-722594: exit status 80 (364.72529ms)

-- stdout --
	* Adding node m03 to cluster multinode-722594 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-722594-m03 already exists in multinode-722594-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-722594-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-722594-m03: (1.950159549s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (33.66s)

TestPreload (125.12s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-511548 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4
E0314 00:49:58.464422 1963897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/functional-362954/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-511548 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m17.311505888s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-511548 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-511548 image pull gcr.io/k8s-minikube/busybox: (1.318657051s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-511548
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-511548: (12.038536948s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-511548 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-511548 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (31.81958491s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-511548 image list
helpers_test.go:175: Cleaning up "test-preload-511548" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-511548
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-511548: (2.363548797s)
--- PASS: TestPreload (125.12s)
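
A sketch of the preload round-trip, with a hypothetical profile name "pre": start without the preloaded images tarball on an older Kubernetes, side-load an image, then restart and confirm it survived.

	minikube start -p pre --memory=2200 --preload=false --kubernetes-version=v1.24.4 \
		--driver=docker --container-runtime=containerd
	minikube -p pre image pull gcr.io/k8s-minikube/busybox
	minikube stop -p pre
	minikube start -p pre --memory=2200 --wait=true
	# busybox should still be listed after the restart
	minikube -p pre image list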

TestScheduledStopUnix (107.75s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-129391 --memory=2048 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-129391 --memory=2048 --driver=docker  --container-runtime=containerd: (31.354351514s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-129391 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-129391 -n scheduled-stop-129391
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-129391 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-129391 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-129391 -n scheduled-stop-129391
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-129391
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-129391 --schedule 15s
E0314 00:52:38.371379 1963897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/addons-122411/client.crt: no such file or directory
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-129391
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-129391: exit status 7 (81.731867ms)

-- stdout --
	scheduled-stop-129391
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-129391 -n scheduled-stop-129391
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-129391 -n scheduled-stop-129391: exit status 7 (85.473423ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-129391" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-129391
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-129391: (4.73704695s)
--- PASS: TestScheduledStopUnix (107.75s)
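
The scheduled-stop commands used above, sketched with a hypothetical profile name "sched": a later --schedule replaces the pending one (hence the "process already finished" notes in the log), and --cancel-scheduled clears it.

	# schedule a stop five minutes out; the command returns immediately
	minikube stop -p sched --schedule 5m
	# the countdown is visible through status
	minikube status -p sched --format={{.TimeToStop}}
	# cancel the pending stop
	minikube stop -p sched --cancel-scheduled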

TestInsufficientStorage (10.28s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-432051 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-432051 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (7.801691327s)

-- stdout --
	{"specversion":"1.0","id":"63a58ea4-48bf-4f24-8522-d4fa968a8067","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-432051] minikube v1.32.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"ae19bd37-14ef-4d3e-8aaf-7234f7df3ffe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18375"}}
	{"specversion":"1.0","id":"1971b6e0-2967-4c01-bba2-a0eb1e2cf13f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"058e1897-ff16-435e-aaa1-1c6a3ac160a1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18375-1958430/kubeconfig"}}
	{"specversion":"1.0","id":"7d906e2b-9078-4b6d-8886-d2f2828b81f6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18375-1958430/.minikube"}}
	{"specversion":"1.0","id":"3b03b2bd-b38e-4220-8605-e326bfbf0fb8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"13e732fc-8d53-4bdd-aa84-8c86e5c14aa3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"d364f3bb-be12-47fb-be58-0822506ebe5f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"89bb8234-dd00-472a-b7bf-2add6ccc3aea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"7c416e38-21ee-4960-a781-bf5febbf3b83","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"e08bdede-c738-45e6-952f-06c7daff2eec","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"5827aa5e-4ae6-4c01-a8c0-1bca8320c067","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-432051\" primary control-plane node in \"insufficient-storage-432051\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"197bc17d-07b7-4d99-80b5-1540030e4605","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.42-1710284843-18375 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"81d4dc67-ad4e-4e68-b89f-91b52774b6d4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"27c0f12f-1e63-4582-a2d5-0a392cb3d678","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-432051 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-432051 --output=json --layout=cluster: exit status 7 (284.770495ms)

-- stdout --
	{"Name":"insufficient-storage-432051","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-432051","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0314 00:53:34.270656 2102442 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-432051" does not appear in /home/jenkins/minikube-integration/18375-1958430/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-432051 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-432051 --output=json --layout=cluster: exit status 7 (298.239277ms)

-- stdout --
	{"Name":"insufficient-storage-432051","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-432051","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0314 00:53:34.571873 2102494 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-432051" does not appear in /home/jenkins/minikube-integration/18375-1958430/kubeconfig
	E0314 00:53:34.582473 2102494 status.go:560] unable to read event log: stat: stat /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/insufficient-storage-432051/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-432051" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-432051
E0314 00:53:35.418026 1963897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/functional-362954/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-432051: (1.892348892s)
--- PASS: TestInsufficientStorage (10.28s)

                                                
                                    
TestRunningBinaryUpgrade (82.97s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.3544111081 start -p running-upgrade-814999 --memory=2200 --vm-driver=docker  --container-runtime=containerd
E0314 00:58:35.417537 1963897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/functional-362954/client.crt: no such file or directory
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.3544111081 start -p running-upgrade-814999 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (43.918144172s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-814999 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-814999 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (35.2717528s)
helpers_test.go:175: Cleaning up "running-upgrade-814999" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-814999
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-814999: (2.674218388s)
--- PASS: TestRunningBinaryUpgrade (82.97s)

                                                
                                    
TestKubernetesUpgrade (381.43s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-146510 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-146510 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (55.679918059s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-146510
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-146510: (1.240567431s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-146510 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-146510 status --format={{.Host}}: exit status 7 (81.068352ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-146510 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-146510 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (5m2.871013059s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-146510 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-146510 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-146510 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd: exit status 106 (135.537787ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-146510] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18375
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18375-1958430/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18375-1958430/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-146510
	    minikube start -p kubernetes-upgrade-146510 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1465102 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-146510 --kubernetes-version=v1.29.0-rc.2
	    

                                                
                                                
** /stderr **
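Before acting on one of the three suggestions, it is worth confirming which version the existing control plane is actually running; the check at version_upgrade_test.go:248 does essentially this (a sketch assuming jq):

	$ kubectl --context kubernetes-upgrade-146510 version --output=json | jq -r '.serverVersion.gitVersion'
	v1.29.0-rc.2

The guard then refuses the downgrade outright, in about 135ms, because an in-place downgrade of an existing cluster is something minikube cannot perform safely.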
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-146510 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-146510 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (17.99348327s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-146510" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-146510
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-146510: (3.242613701s)
--- PASS: TestKubernetesUpgrade (381.43s)

                                                
                                    
TestMissingContainerUpgrade (174.32s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.2866527414 start -p missing-upgrade-443613 --memory=2200 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.2866527414 start -p missing-upgrade-443613 --memory=2200 --driver=docker  --container-runtime=containerd: (1m13.944956731s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-443613
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-443613: (13.681083416s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-443613
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-443613 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-443613 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m22.664463162s)
helpers_test.go:175: Cleaning up "missing-upgrade-443613" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-443613
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-443613: (2.64955803s)
--- PASS: TestMissingContainerUpgrade (174.32s)
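What this test simulates: a v1.26.0 profile whose node container has been stopped and deleted out from under it, which the binary under test must recreate on start. Condensed from the steps above:

	$ docker stop missing-upgrade-443613 && docker rm missing-upgrade-443613
	$ out/minikube-linux-arm64 start -p missing-upgrade-443613 --memory=2200 --driver=docker --container-runtime=containerd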

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-482251 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-482251 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd: exit status 14 (90.474525ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-482251] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18375
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18375-1958430/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18375-1958430/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
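The usage error is the assertion here: --no-kubernetes and --kubernetes-version are mutually exclusive, and the combination must fail fast (exit status 14 in ~90ms) rather than start anything. The two valid alternatives, as the message itself suggests:

	$ minikube config unset kubernetes-version     # clear a globally pinned version, then
	$ out/minikube-linux-arm64 start -p NoKubernetes-482251 --no-kubernetes --driver=docker --container-runtime=containerd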
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (39.65s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-482251 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-482251 --driver=docker  --container-runtime=containerd: (39.184770443s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-482251 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (39.65s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (19.9s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-482251 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-482251 --no-kubernetes --driver=docker  --container-runtime=containerd: (17.423262747s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-482251 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-482251 status -o json: exit status 2 (420.213778ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-482251","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
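Exit status 2 from "status" is expected here: the node container is up but the Kubernetes components are stopped, which is exactly what --no-kubernetes on an existing profile should produce. A one-liner to assert that state (a sketch assuming jq):

	$ out/minikube-linux-arm64 -p NoKubernetes-482251 status -o json | jq -r '.Host + "/" + .Kubelet'
	Running/Stopped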
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-482251
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-482251: (2.053029152s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (19.90s)

                                                
                                    
TestNoKubernetes/serial/Start (9.56s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-482251 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-482251 --no-kubernetes --driver=docker  --container-runtime=containerd: (9.562630737s)
--- PASS: TestNoKubernetes/serial/Start (9.56s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.37s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-482251 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-482251 "sudo systemctl is-active --quiet service kubelet": exit status 1 (365.284723ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
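"Process exited with status 3" is the passing outcome: systemctl is-active exits non-zero when the queried unit is not active (3 is the customary code for an inactive unit), and minikube ssh surfaces that as its own non-zero exit. Run without --quiet, the same check should simply print the state:

	$ out/minikube-linux-arm64 ssh -p NoKubernetes-482251 "sudo systemctl is-active kubelet"
	inactive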
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.37s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.55s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-arm64 profile list: (1.057510365s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.55s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.24s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-482251
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-482251: (1.243658675s)
--- PASS: TestNoKubernetes/serial/Stop (1.24s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (6.63s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-482251 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-482251 --driver=docker  --container-runtime=containerd: (6.629450045s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.63s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-482251 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-482251 "sudo systemctl is-active --quiet service kubelet": exit status 1 (277.54892ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (1.16s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.16s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (104.2s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.3781294387 start -p stopped-upgrade-266539 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.3781294387 start -p stopped-upgrade-266539 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (42.654296033s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.3781294387 -p stopped-upgrade-266539 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.3781294387 -p stopped-upgrade-266539 stop: (19.933951936s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-266539 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0314 00:57:38.380352 1963897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/addons-122411/client.crt: no such file or directory
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-266539 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (41.608607398s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (104.20s)
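Condensed, the upgrade path being exercised is: create and stop a cluster with the released v1.26.0 binary, then require the binary under test to bring it back up (the /tmp path is the harness's downloaded copy; yours will differ):

	$ /tmp/minikube-v1.26.0.3781294387 start -p stopped-upgrade-266539 --memory=2200 --vm-driver=docker --container-runtime=containerd
	$ /tmp/minikube-v1.26.0.3781294387 -p stopped-upgrade-266539 stop
	$ out/minikube-linux-arm64 start -p stopped-upgrade-266539 --memory=2200 --driver=docker --container-runtime=containerd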

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.06s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-266539
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-266539: (1.057582332s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.06s)

                                                
                                    
TestPause/serial/Start (60.57s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-350551 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
E0314 01:00:41.436166 1963897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/addons-122411/client.crt: no such file or directory
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-350551 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (1m0.573271258s)
--- PASS: TestPause/serial/Start (60.57s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (7.29s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-350551 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-350551 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (7.281764254s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (7.29s)

                                                
                                    
TestPause/serial/Pause (1.05s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-350551 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-350551 --alsologtostderr -v=5: (1.053018965s)
--- PASS: TestPause/serial/Pause (1.05s)

                                                
                                    
TestPause/serial/VerifyStatus (0.43s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-350551 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-350551 --output=json --layout=cluster: exit status 2 (431.638899ms)

                                                
                                                
-- stdout --
	{"Name":"pause-350551","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-350551","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
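StatusCode 418 is minikube's "Paused" state (HTTP's 418 repurposed), and the paused cluster makes "status" exit non-zero (2 here), which the test expects. The per-component breakdown is easy to extract (a sketch assuming jq):

	$ out/minikube-linux-arm64 status -p pause-350551 --output=json --layout=cluster | jq '.Nodes[0].Components'
	{
	  "apiserver": {
	    "Name": "apiserver",
	    "StatusCode": 418,
	    "StatusName": "Paused"
	  },
	  "kubelet": {
	    "Name": "kubelet",
	    "StatusCode": 405,
	    "StatusName": "Stopped"
	  }
	}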
--- PASS: TestPause/serial/VerifyStatus (0.43s)

                                                
                                    
TestPause/serial/Unpause (0.82s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-350551 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.82s)

                                                
                                    
TestPause/serial/PauseAgain (1.1s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-350551 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-350551 --alsologtostderr -v=5: (1.102990592s)
--- PASS: TestPause/serial/PauseAgain (1.10s)

                                                
                                    
TestPause/serial/DeletePaused (2.93s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-350551 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-350551 --alsologtostderr -v=5: (2.9267104s)
--- PASS: TestPause/serial/DeletePaused (2.93s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.53s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-350551
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-350551: exit status 1 (42.366364ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-350551: no such volume

                                                
                                                
** /stderr **
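The non-zero exit from docker volume inspect is itself the assertion: after delete, nothing named pause-350551 may survive. The same sweep by hand (a sketch):

	$ docker ps -a --filter name=pause-350551 --format '{{.Names}}'    # expect no output
	$ docker volume inspect pause-350551                               # expect "no such volume", exit 1
	$ docker network ls --filter name=pause-350551 -q                  # expect no output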
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.53s)

                                                
                                    
TestNetworkPlugins/group/false (5.46s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-355815 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-355815 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (264.066862ms)

                                                
                                                
-- stdout --
	* [false-355815] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18375
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18375-1958430/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18375-1958430/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0314 01:01:23.768154 2142635 out.go:291] Setting OutFile to fd 1 ...
	I0314 01:01:23.776720 2142635 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 01:01:23.777242 2142635 out.go:304] Setting ErrFile to fd 2...
	I0314 01:01:23.777265 2142635 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 01:01:23.777562 2142635 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18375-1958430/.minikube/bin
	I0314 01:01:23.778121 2142635 out.go:298] Setting JSON to false
	I0314 01:01:23.779191 2142635 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":31434,"bootTime":1710346650,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0314 01:01:23.779332 2142635 start.go:139] virtualization:  
	I0314 01:01:23.782153 2142635 out.go:177] * [false-355815] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0314 01:01:23.785107 2142635 out.go:177]   - MINIKUBE_LOCATION=18375
	I0314 01:01:23.785173 2142635 notify.go:220] Checking for updates...
	I0314 01:01:23.790205 2142635 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0314 01:01:23.792776 2142635 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18375-1958430/kubeconfig
	I0314 01:01:23.795040 2142635 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18375-1958430/.minikube
	I0314 01:01:23.797231 2142635 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0314 01:01:23.799100 2142635 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0314 01:01:23.801783 2142635 config.go:182] Loaded profile config "force-systemd-flag-804124": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0314 01:01:23.801931 2142635 driver.go:392] Setting default libvirt URI to qemu:///system
	I0314 01:01:23.824275 2142635 docker.go:122] docker version: linux-25.0.4:Docker Engine - Community
	I0314 01:01:23.824412 2142635 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0314 01:01:23.940415 2142635 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:56 SystemTime:2024-03-14 01:01:23.929455567 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0314 01:01:23.940518 2142635 docker.go:295] overlay module found
	I0314 01:01:23.942870 2142635 out.go:177] * Using the docker driver based on user configuration
	I0314 01:01:23.944756 2142635 start.go:297] selected driver: docker
	I0314 01:01:23.944771 2142635 start.go:901] validating driver "docker" against <nil>
	I0314 01:01:23.944785 2142635 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0314 01:01:23.947189 2142635 out.go:177] 
	W0314 01:01:23.949167 2142635 out.go:239] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0314 01:01:23.951571 2142635 out.go:177] 

                                                
                                                
** /stderr **
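This expected failure pins down a rule rather than a bug: with --container-runtime=containerd, pod networking needs a CNI plugin, so minikube rejects --cni=false with MK_USAGE before creating anything. A start line that should be accepted instead (a sketch; any supported --cni value such as bridge, kindnet, or calico would do):

	$ out/minikube-linux-arm64 start -p false-355815 --memory=2048 --cni=bridge --driver=docker --container-runtime=containerd

The debugLogs dump that follows is expected to find no such profile, since the start command never got far enough to create one.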
net_test.go:88: 
----------------------- debugLogs start: false-355815 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-355815

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-355815

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-355815

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-355815

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-355815

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-355815

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-355815

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-355815

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-355815

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-355815

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-355815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-355815"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-355815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-355815"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-355815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-355815"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-355815

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-355815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-355815"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-355815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-355815"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-355815" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-355815" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-355815" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-355815" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-355815" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-355815" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-355815" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-355815" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-355815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-355815"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-355815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-355815"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-355815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-355815"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-355815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-355815"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-355815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-355815"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-355815" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-355815" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-355815" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-355815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-355815"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-355815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-355815"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-355815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-355815"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-355815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-355815"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-355815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-355815"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-355815

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-355815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-355815"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-355815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-355815"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-355815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-355815"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-355815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-355815"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-355815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-355815"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-355815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-355815"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-355815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-355815"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-355815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-355815"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-355815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-355815"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-355815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-355815"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-355815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-355815"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-355815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-355815"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-355815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-355815"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-355815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-355815"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-355815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-355815"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-355815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-355815"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-355815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-355815"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-355815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-355815"

                                                
                                                
----------------------- debugLogs end: false-355815 [took: 4.971524236s] --------------------------------
helpers_test.go:175: Cleaning up "false-355815" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-355815
--- PASS: TestNetworkPlugins/group/false (5.46s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (163.42s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-023742 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
E0314 01:03:35.417936 1963897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/functional-362954/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-023742 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: (2m43.421268227s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (163.42s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (82.35s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-183952 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-183952 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2: (1m22.350833691s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (82.35s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (10.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-023742 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [1c8a064f-f0e2-4871-83b5-75720d83f0e3] Pending
helpers_test.go:344: "busybox" [1c8a064f-f0e2-4871-83b5-75720d83f0e3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [1c8a064f-f0e2-4871-83b5-75720d83f0e3] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.004926739s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-023742 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.01s)
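The deploy-and-wait pattern above maps onto plain kubectl roughly as follows (a sketch; the harness polls pods by label itself rather than shelling out to kubectl wait):

	$ kubectl --context old-k8s-version-023742 create -f testdata/busybox.yaml
	$ kubectl --context old-k8s-version-023742 wait --for=condition=ready pod -l integration-test=busybox --timeout=8m
	$ kubectl --context old-k8s-version-023742 exec busybox -- /bin/sh -c "ulimit -n"

The final exec doubles as a smoke test that the pod is actually schedulable and running under the old v1.20.0 control plane.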

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.38s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-023742 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-023742 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.213590462s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-023742 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.38s)

TestStartStop/group/old-k8s-version/serial/Stop (12.73s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-023742 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-023742 --alsologtostderr -v=3: (12.733271296s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.73s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-023742 -n old-k8s-version-023742
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-023742 -n old-k8s-version-023742: exit status 7 (93.987988ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-023742 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)
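
The "status error: exit status 7 (may be ok)" lines are expected after a stop: minikube status encodes component health in the exit code's low bits (1 = host not running, 2 = cluster not running, 4 = kubernetes not running, per minikube's status help text), so a fully stopped profile exits with 1+2+4 = 7, while a paused one (see the Pause steps later in this report) exits with 2. For example:

	$ out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-023742
	Stopped
	$ echo $?
	7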

TestStartStop/group/no-preload/serial/DeployApp (9.42s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-183952 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [b7d3c78e-0888-4c16-a802-4861e05737cc] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [b7d3c78e-0888-4c16-a802-4861e05737cc] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.003259223s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-183952 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.42s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.25s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-183952 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-183952 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.096839691s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-183952 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.25s)

TestStartStop/group/no-preload/serial/Stop (12.09s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-183952 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-183952 --alsologtostderr -v=3: (12.094712999s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.09s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-183952 -n no-preload-183952
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-183952 -n no-preload-183952: exit status 7 (83.318842ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-183952 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/no-preload/serial/SecondStart (281.14s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-183952 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2
E0314 01:07:38.371635 1963897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/addons-122411/client.crt: no such file or directory
E0314 01:08:35.417871 1963897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/functional-362954/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-183952 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2: (4m40.74344727s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-183952 -n no-preload-183952
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (281.14s)
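
The E-level cert_rotation lines interleaved above (and in the other SecondStart runs below) appear to come from client-go's certificate-rotation watcher inside the long-running test process: it still tracks kubeconfig entries for profiles torn down earlier in the run (addons-122411, functional-362954, and later old-k8s-version-023742 and no-preload-183952), so each reload of those profiles' client.crt fails with "no such file or directory". They are leftover noise from deleted profiles, not failures of the test in progress.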

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-4p698" [b1de037e-0c57-4aed-bd00-c6b61a831f67] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003709605s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.14s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-4p698" [b1de037e-0c57-4aed-bd00-c6b61a831f67] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004280257s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-183952 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.14s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.32s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-183952 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.32s)
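
VerifyKubernetesImages lists the images in the runtime's store and reports anything outside minikube's expected set; the kindnetd CNI images and the busybox test image are the usual benign extras here. A manual equivalent, assuming jq is installed and that each entry in the JSON output carries a repoTags field (an assumption, not confirmed by this log):

	$ out/minikube-linux-arm64 -p no-preload-183952 image list --format=json | jq -r '.[].repoTags[]'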

TestStartStop/group/no-preload/serial/Pause (3.72s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-183952 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p no-preload-183952 --alsologtostderr -v=1: (1.046339051s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-183952 -n no-preload-183952
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-183952 -n no-preload-183952: exit status 2 (464.741589ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-183952 -n no-preload-183952
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-183952 -n no-preload-183952: exit status 2 (466.6095ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-183952 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-183952 -n no-preload-183952
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-183952 -n no-preload-183952
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.72s)
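
The Pause check follows a fixed pattern, repeated for every profile in this report: pause the cluster, confirm via status that the API server reports Paused and the kubelet Stopped (each query exiting 2, the "cluster not running" bit noted earlier), then unpause and re-run both status queries, which now exit cleanly.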

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-s2g5f" [d97d7a7f-d71a-4364-b193-eb2874b60eb3] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003757822s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/FirstStart (66.76s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-480250 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-480250 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4: (1m6.761360732s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (66.76s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.13s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-s2g5f" [d97d7a7f-d71a-4364-b193-eb2874b60eb3] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00367402s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-023742 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.13s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-023742 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.28s)

TestStartStop/group/old-k8s-version/serial/Pause (3.81s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-023742 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p old-k8s-version-023742 --alsologtostderr -v=1: (1.508142352s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-023742 -n old-k8s-version-023742
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-023742 -n old-k8s-version-023742: exit status 2 (452.321939ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-023742 -n old-k8s-version-023742
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-023742 -n old-k8s-version-023742: exit status 2 (336.682845ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-023742 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-023742 -n old-k8s-version-023742
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-023742 -n old-k8s-version-023742
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.81s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (69.65s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-465054 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4
E0314 01:12:38.371557 1963897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/addons-122411/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-465054 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4: (1m9.652969641s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (69.65s)

TestStartStop/group/embed-certs/serial/DeployApp (7.42s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-480250 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [e569aeb7-2d34-4920-93b0-4232579c91b3] Pending
helpers_test.go:344: "busybox" [e569aeb7-2d34-4920-93b0-4232579c91b3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [e569aeb7-2d34-4920-93b0-4232579c91b3] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 7.004266041s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-480250 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (7.42s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.19s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-480250 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-480250 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.070603146s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-480250 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.19s)

TestStartStop/group/embed-certs/serial/Stop (12.09s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-480250 --alsologtostderr -v=3
E0314 01:13:35.417719 1963897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/functional-362954/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-480250 --alsologtostderr -v=3: (12.090501251s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.09s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.37s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-465054 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [289a219f-2c20-4bf5-8881-cfb80dd10fd8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [289a219f-2c20-4bf5-8881-cfb80dd10fd8] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.004759664s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-465054 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.37s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-480250 -n embed-certs-480250
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-480250 -n embed-certs-480250: exit status 7 (86.082061ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-480250 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/embed-certs/serial/SecondStart (290.02s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-480250 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-480250 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4: (4m49.649898703s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-480250 -n embed-certs-480250
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (290.02s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.55s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-465054 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-465054 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.370927698s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-465054 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.55s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.37s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-465054 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-465054 --alsologtostderr -v=3: (12.367458096s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.37s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-465054 -n default-k8s-diff-port-465054
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-465054 -n default-k8s-diff-port-465054: exit status 7 (85.399955ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-465054 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (300.51s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-465054 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4
E0314 01:15:38.143687 1963897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/old-k8s-version-023742/client.crt: no such file or directory
E0314 01:15:38.148955 1963897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/old-k8s-version-023742/client.crt: no such file or directory
E0314 01:15:38.159252 1963897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/old-k8s-version-023742/client.crt: no such file or directory
E0314 01:15:38.179571 1963897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/old-k8s-version-023742/client.crt: no such file or directory
E0314 01:15:38.219940 1963897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/old-k8s-version-023742/client.crt: no such file or directory
E0314 01:15:38.300226 1963897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/old-k8s-version-023742/client.crt: no such file or directory
E0314 01:15:38.460633 1963897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/old-k8s-version-023742/client.crt: no such file or directory
E0314 01:15:38.781034 1963897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/old-k8s-version-023742/client.crt: no such file or directory
E0314 01:15:39.421337 1963897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/old-k8s-version-023742/client.crt: no such file or directory
E0314 01:15:40.702109 1963897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/old-k8s-version-023742/client.crt: no such file or directory
E0314 01:15:43.262503 1963897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/old-k8s-version-023742/client.crt: no such file or directory
E0314 01:15:48.383506 1963897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/old-k8s-version-023742/client.crt: no such file or directory
E0314 01:15:58.624314 1963897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/old-k8s-version-023742/client.crt: no such file or directory
E0314 01:16:19.105152 1963897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/old-k8s-version-023742/client.crt: no such file or directory
E0314 01:16:57.658286 1963897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/no-preload-183952/client.crt: no such file or directory
E0314 01:16:57.663540 1963897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/no-preload-183952/client.crt: no such file or directory
E0314 01:16:57.673788 1963897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/no-preload-183952/client.crt: no such file or directory
E0314 01:16:57.694032 1963897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/no-preload-183952/client.crt: no such file or directory
E0314 01:16:57.734289 1963897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/no-preload-183952/client.crt: no such file or directory
E0314 01:16:57.814649 1963897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/no-preload-183952/client.crt: no such file or directory
E0314 01:16:57.975012 1963897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/no-preload-183952/client.crt: no such file or directory
E0314 01:16:58.295463 1963897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/no-preload-183952/client.crt: no such file or directory
E0314 01:16:58.936309 1963897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/no-preload-183952/client.crt: no such file or directory
E0314 01:17:00.065423 1963897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/old-k8s-version-023742/client.crt: no such file or directory
E0314 01:17:00.221121 1963897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/no-preload-183952/client.crt: no such file or directory
E0314 01:17:02.783367 1963897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/no-preload-183952/client.crt: no such file or directory
E0314 01:17:07.904200 1963897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/no-preload-183952/client.crt: no such file or directory
E0314 01:17:18.145348 1963897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/no-preload-183952/client.crt: no such file or directory
E0314 01:17:21.436977 1963897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/addons-122411/client.crt: no such file or directory
E0314 01:17:38.371267 1963897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/addons-122411/client.crt: no such file or directory
E0314 01:17:38.625514 1963897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/no-preload-183952/client.crt: no such file or directory
E0314 01:18:19.585960 1963897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/no-preload-183952/client.crt: no such file or directory
E0314 01:18:21.985694 1963897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/old-k8s-version-023742/client.crt: no such file or directory
E0314 01:18:35.418277 1963897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/functional-362954/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-465054 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4: (5m0.033500872s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-465054 -n default-k8s-diff-port-465054
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (300.51s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-dm4wl" [3527a280-0c7d-4d86-8119-ed4cbf7c33f4] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004881748s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-dm4wl" [3527a280-0c7d-4d86-8119-ed4cbf7c33f4] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004698264s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-480250 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-480250 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/embed-certs/serial/Pause (3.18s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-480250 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-480250 -n embed-certs-480250
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-480250 -n embed-certs-480250: exit status 2 (347.522719ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-480250 -n embed-certs-480250
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-480250 -n embed-certs-480250: exit status 2 (349.690468ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-480250 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-480250 -n embed-certs-480250
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-480250 -n embed-certs-480250
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.18s)

TestStartStop/group/newest-cni/serial/FirstStart (45.79s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-743041 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-743041 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2: (45.789945154s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (45.79s)
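
The newest-cni profile starts a release-candidate Kubernetes with pod networking deliberately left to an unconfigured CNI, which is why several later steps in this group are skipped with the "cni mode requires additional setup" warning and DeployApp passes as a no-op. The relevant flags from the start command above:

	--wait=apiserver,system_pods,default_sa                  wait only for control-plane readiness; nodes cannot go Ready without a CNI
	--network-plugin=cni                                     defer pod networking to a CNI the test never installs
	--extra-config=kubeadm.pod-network-cidr=10.42.0.0/16     pod CIDR passed through to kubeadm for that CNI
	--feature-gates ServerSideApply=true                     feature gate applied to the components minikube launches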

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-7st4r" [e9367c14-6ba3-4e7a-8729-215a064191bb] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004723279s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.17s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-7st4r" [e9367c14-6ba3-4e7a-8729-215a064191bb] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003749714s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-465054 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.17s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.32s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-465054 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.32s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (4.2s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-465054 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p default-k8s-diff-port-465054 --alsologtostderr -v=1: (1.256751188s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-465054 -n default-k8s-diff-port-465054
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-465054 -n default-k8s-diff-port-465054: exit status 2 (496.93652ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-465054 -n default-k8s-diff-port-465054
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-465054 -n default-k8s-diff-port-465054: exit status 2 (462.97025ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-465054 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-465054 -n default-k8s-diff-port-465054
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-465054 -n default-k8s-diff-port-465054
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (4.20s)

TestNetworkPlugins/group/auto/Start (70.7s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-355815 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-355815 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (1m10.702301908s)
--- PASS: TestNetworkPlugins/group/auto/Start (70.70s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.64s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-743041 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0314 01:19:41.506559 1963897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/no-preload-183952/client.crt: no such file or directory
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-743041 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.639601475s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.64s)

TestStartStop/group/newest-cni/serial/Stop (1.31s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-743041 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-743041 --alsologtostderr -v=3: (1.312143694s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.31s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.3s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-743041 -n newest-cni-743041
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-743041 -n newest-cni-743041: exit status 7 (120.652135ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-743041 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.30s)

TestStartStop/group/newest-cni/serial/SecondStart (21.8s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-743041 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-743041 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2: (21.300499029s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-743041 -n newest-cni-743041
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (21.80s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.38s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-743041 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.38s)

TestStartStop/group/newest-cni/serial/Pause (3.73s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-743041 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p newest-cni-743041 --alsologtostderr -v=1: (1.128725601s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-743041 -n newest-cni-743041
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-743041 -n newest-cni-743041: exit status 2 (449.313897ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-743041 -n newest-cni-743041
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-743041 -n newest-cni-743041: exit status 2 (344.170765ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-743041 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-743041 -n newest-cni-743041
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-743041 -n newest-cni-743041
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.73s)
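Note: the Pause test drives one full pause/unpause cycle. While paused, `minikube status` reports APIServer=Paused and Kubelet=Stopped and exits with status 2, which the test tolerates ("may be ok"). A minimal by-hand reproduction against this run's profile:
	out/minikube-linux-arm64 pause -p newest-cni-743041 --alsologtostderr -v=1
	out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-743041 -n newest-cni-743041
	out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-743041 -n newest-cni-743041
	out/minikube-linux-arm64 unpause -p newest-cni-743041 --alsologtostderr -v=1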
E0314 01:25:37.986660 1963897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/auto-355815/client.crt: no such file or directory
E0314 01:25:37.991926 1963897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/auto-355815/client.crt: no such file or directory
E0314 01:25:38.002163 1963897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/auto-355815/client.crt: no such file or directory
E0314 01:25:38.022431 1963897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/auto-355815/client.crt: no such file or directory
E0314 01:25:38.062666 1963897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/auto-355815/client.crt: no such file or directory
E0314 01:25:38.142974 1963897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/auto-355815/client.crt: no such file or directory
E0314 01:25:38.144129 1963897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/old-k8s-version-023742/client.crt: no such file or directory
E0314 01:25:38.303459 1963897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/auto-355815/client.crt: no such file or directory
E0314 01:25:38.624225 1963897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/auto-355815/client.crt: no such file or directory
E0314 01:25:39.265076 1963897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/auto-355815/client.crt: no such file or directory
E0314 01:25:40.546100 1963897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/auto-355815/client.crt: no such file or directory
E0314 01:25:43.106223 1963897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/auto-355815/client.crt: no such file or directory
E0314 01:25:48.227048 1963897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/auto-355815/client.crt: no such file or directory
E0314 01:25:58.468102 1963897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/auto-355815/client.crt: no such file or directory

TestNetworkPlugins/group/kindnet/Start (66.75s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-355815 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-355815 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (1m6.753067254s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (66.75s)

TestNetworkPlugins/group/auto/KubeletFlags (0.4s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-355815 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.40s)
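Note: each KubeletFlags test in this report greps the running kubelet out of the process table over SSH; `pgrep -a` prints the PID plus the full command line, so the flags kubelet was actually started with can be inspected (the exact flags are not echoed in this report):
	out/minikube-linux-arm64 ssh -p auto-355815 "pgrep -a kubelet"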

TestNetworkPlugins/group/auto/NetCatPod (10.3s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-355815 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-fmx5q" [f94199e2-045d-46b7-9486-3a7fbad32209] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0314 01:20:38.144021 1963897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/old-k8s-version-023742/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-fmx5q" [f94199e2-045d-46b7-9486-3a7fbad32209] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.00516145s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.30s)
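Note: the NetCatPod tests deploy testdata/netcat-deployment.yaml (shipped with the minikube repo; its pods carry the label app=netcat and run a container named dnsutils) and wait for the pod to go Ready over the CNI under test. The manifest itself is not reproduced in this report; a rough, hypothetical stand-in with a placeholder image would be:
	# hypothetical stand-in for testdata/netcat-deployment.yaml (busybox is a placeholder, not the image the suite uses)
	kubectl --context auto-355815 create deployment netcat --image=busybox -- sleep 3600
	kubectl --context auto-355815 wait --for=condition=ready pod -l app=netcat --timeout=15m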

TestNetworkPlugins/group/auto/DNS (0.25s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-355815 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.25s)
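Note: the DNS check resolves the kubernetes.default service name from inside the netcat pod, which exercises CoreDNS reachability across the CNI under test:
	kubectl --context auto-355815 exec deployment/netcat -- nslookup kubernetes.default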

TestNetworkPlugins/group/auto/Localhost (0.16s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-355815 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.16s)

TestNetworkPlugins/group/auto/HairPin (0.16s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-355815 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.16s)
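Note: Localhost and HairPin probe the same port two ways. Localhost dials 127.0.0.1 inside the pod; HairPin dials the name "netcat" (presumably a Service fronting the same pod), so the connection leaves the pod, hits the service address, and must be NATed back to the originating pod (hairpin traffic):
	kubectl --context auto-355815 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
	kubectl --context auto-355815 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"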

TestNetworkPlugins/group/calico/Start (77.75s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-355815 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-355815 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (1m17.751171882s)
--- PASS: TestNetworkPlugins/group/calico/Start (77.75s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-9zvwx" [500f682d-9023-4176-9062-866dd2b7e555] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004914587s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
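Note: ControllerPod gates the rest of the group on the CNI's own agent pod being Running in its namespace, selected purely by label (app=kindnet in kube-system here; the flannel group below waits on app=flannel in kube-flannel). A roughly equivalent by-hand check:
	kubectl --context kindnet-355815 -n kube-system wait --for=condition=ready pod -l app=kindnet --timeout=10m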

TestNetworkPlugins/group/kindnet/KubeletFlags (0.38s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-355815 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.38s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.36s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-355815 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-sfsnx" [0f1b1653-5229-4df6-977d-f6a2113ba9d8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-sfsnx" [0f1b1653-5229-4df6-977d-f6a2113ba9d8] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.004301985s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.36s)

TestNetworkPlugins/group/kindnet/DNS (0.28s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-355815 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.28s)

TestNetworkPlugins/group/kindnet/Localhost (0.25s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-355815 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.25s)

TestNetworkPlugins/group/kindnet/HairPin (0.25s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-355815 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.25s)

TestNetworkPlugins/group/custom-flannel/Start (65.41s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-355815 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
E0314 01:22:25.346768 1963897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/no-preload-183952/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-355815 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (1m5.41321849s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (65.41s)
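Note: this run shows both forms of minikube's --cni flag. The other groups in this report pass a built-in plugin name (kindnet, calico, flannel, bridge), while custom-flannel passes a path to an arbitrary CNI manifest:
	out/minikube-linux-arm64 start -p custom-flannel-355815 --cni=testdata/kube-flannel.yaml --driver=docker --container-runtime=containerd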

TestNetworkPlugins/group/calico/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-rz7qz" [5adedd33-f6f6-44d6-b45f-7190c93b23de] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005914513s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.48s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-355815 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.48s)

TestNetworkPlugins/group/calico/NetCatPod (10.42s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-355815 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-tq22j" [c2ed6f71-5e22-4eb7-80aa-cd709946de28] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0314 01:22:38.370629 1963897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/addons-122411/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-tq22j" [c2ed6f71-5e22-4eb7-80aa-cd709946de28] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.004242336s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.42s)

TestNetworkPlugins/group/calico/DNS (0.25s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-355815 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.25s)

TestNetworkPlugins/group/calico/Localhost (0.17s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-355815 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.17s)

TestNetworkPlugins/group/calico/HairPin (0.37s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-355815 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.37s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.46s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-355815 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.46s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (9.49s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-355815 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-8b7s4" [f44d6bd0-10ee-4047-b93d-b96f75bd2b73] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-8b7s4" [f44d6bd0-10ee-4047-b93d-b96f75bd2b73] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.004822604s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.49s)

TestNetworkPlugins/group/enable-default-cni/Start (90.93s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-355815 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-355815 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (1m30.92916633s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (90.93s)

TestNetworkPlugins/group/custom-flannel/DNS (0.21s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-355815 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.21s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-355815 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-355815 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

TestNetworkPlugins/group/flannel/Start (64.73s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-355815 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
E0314 01:23:43.634870 1963897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/default-k8s-diff-port-465054/client.crt: no such file or directory
E0314 01:23:43.655099 1963897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/default-k8s-diff-port-465054/client.crt: no such file or directory
E0314 01:23:43.695275 1963897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/default-k8s-diff-port-465054/client.crt: no such file or directory
E0314 01:23:43.776136 1963897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/default-k8s-diff-port-465054/client.crt: no such file or directory
E0314 01:23:43.936295 1963897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/default-k8s-diff-port-465054/client.crt: no such file or directory
E0314 01:23:44.259314 1963897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/default-k8s-diff-port-465054/client.crt: no such file or directory
E0314 01:23:44.899595 1963897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/default-k8s-diff-port-465054/client.crt: no such file or directory
E0314 01:23:46.179790 1963897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/default-k8s-diff-port-465054/client.crt: no such file or directory
E0314 01:23:48.739932 1963897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/default-k8s-diff-port-465054/client.crt: no such file or directory
E0314 01:23:53.860673 1963897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/default-k8s-diff-port-465054/client.crt: no such file or directory
E0314 01:24:04.100876 1963897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/default-k8s-diff-port-465054/client.crt: no such file or directory
E0314 01:24:24.582021 1963897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18375-1958430/.minikube/profiles/default-k8s-diff-port-465054/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-355815 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (1m4.729855042s)
--- PASS: TestNetworkPlugins/group/flannel/Start (64.73s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.32s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-355815 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.32s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.3s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-355815 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-mgnfc" [4e48e88e-7939-4c8d-9747-14a1379d1f52] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-mgnfc" [4e48e88e-7939-4c8d-9747-14a1379d1f52] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.004105044s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.30s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-d2f7s" [4d2579a5-3e99-4ad0-ab66-db7072fcec93] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004172407s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-355815 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-355815 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-355815 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.34s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-355815 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.34s)

TestNetworkPlugins/group/flannel/NetCatPod (8.26s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-355815 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-r96qb" [ce0f2ac0-8db4-4ef0-81a2-ee64d4fff4b7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-r96qb" [ce0f2ac0-8db4-4ef0-81a2-ee64d4fff4b7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 8.004684213s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (8.26s)

TestNetworkPlugins/group/flannel/DNS (0.24s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-355815 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.24s)

TestNetworkPlugins/group/flannel/Localhost (0.24s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-355815 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.24s)

TestNetworkPlugins/group/flannel/HairPin (0.22s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-355815 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.22s)

TestNetworkPlugins/group/bridge/Start (46.06s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-355815 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-355815 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (46.061642446s)
--- PASS: TestNetworkPlugins/group/bridge/Start (46.06s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.3s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-355815 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.30s)

TestNetworkPlugins/group/bridge/NetCatPod (10.25s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-355815 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-8dw85" [78c5958d-2920-4aec-8e59-874fd5ed982c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-8dw85" [78c5958d-2920-4aec-8e59-874fd5ed982c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.003814565s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.25s)

TestNetworkPlugins/group/bridge/DNS (0.17s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-355815 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

TestNetworkPlugins/group/bridge/Localhost (0.15s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-355815 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

TestNetworkPlugins/group/bridge/HairPin (0.15s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-355815 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

Test skip (31/335)

TestDownloadOnly/v1.20.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.28.4/cached-images (0s)
=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

TestDownloadOnly/v1.28.4/binaries (0s)
=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

TestDownloadOnly/v1.28.4/kubectl (0s)
=== RUN   TestDownloadOnly/v1.28.4/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.4/kubectl (0.00s)

TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)
=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

TestDownloadOnly/v1.29.0-rc.2/binaries (0s)
=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)
=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

TestDownloadOnlyKic (0.57s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-976036 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-976036" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-976036
--- SKIP: TestDownloadOnlyKic (0.57s)

TestOffline (0s)
=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/parallel/HelmTiller (0s)
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:444: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.21s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-528284" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-528284
--- SKIP: TestStartStop/group/disable-driver-mounts (0.21s)

TestNetworkPlugins/group/kubenet (4.9s)
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-355815 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-355815

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-355815

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-355815

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-355815

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-355815

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-355815

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-355815

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-355815

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-355815

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-355815

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-355815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-355815"

>>> host: /etc/hosts:
* Profile "kubenet-355815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-355815"

>>> host: /etc/resolv.conf:
* Profile "kubenet-355815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-355815"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-355815

>>> host: crictl pods:
* Profile "kubenet-355815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-355815"

>>> host: crictl containers:
* Profile "kubenet-355815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-355815"

>>> k8s: describe netcat deployment:
error: context "kubenet-355815" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-355815" does not exist

>>> k8s: netcat logs:
error: context "kubenet-355815" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-355815" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-355815" does not exist

>>> k8s: coredns logs:
error: context "kubenet-355815" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-355815" does not exist

>>> k8s: api server logs:
error: context "kubenet-355815" does not exist

>>> host: /etc/cni:
* Profile "kubenet-355815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-355815"

>>> host: ip a s:
* Profile "kubenet-355815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-355815"

>>> host: ip r s:
* Profile "kubenet-355815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-355815"

>>> host: iptables-save:
* Profile "kubenet-355815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-355815"

>>> host: iptables table nat:
* Profile "kubenet-355815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-355815"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-355815" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-355815" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-355815" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-355815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-355815"

>>> host: kubelet daemon config:
* Profile "kubenet-355815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-355815"

>>> k8s: kubelet logs:
* Profile "kubenet-355815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-355815"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-355815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-355815"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-355815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-355815"
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-355815

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-355815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-355815"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-355815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-355815"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-355815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-355815"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-355815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-355815"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-355815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-355815"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-355815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-355815"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-355815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-355815"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-355815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-355815"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-355815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-355815"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-355815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-355815"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-355815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-355815"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-355815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-355815"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-355815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-355815"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-355815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-355815"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-355815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-355815"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-355815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-355815"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-355815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-355815"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-355815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-355815"

                                                
                                                
----------------------- debugLogs end: kubenet-355815 [took: 4.700450724s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-355815" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-355815
--- SKIP: TestNetworkPlugins/group/kubenet (4.90s)
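Note: every probe in the debugLogs block above fails in one of two uniform shapes: kubectl-based probes report `error: context "kubenet-355815" does not exist` (or `Error in configuration: context was not found for specified context`), and host-side probes report `* Profile "kubenet-355815" not found`. Both are expected here, since the skipped test never started a cluster for this profile. A minimal sketch of a pre-flight check that reproduces those two shapes (a hypothetical helper, not part of the minikube test suite; it assumes kubectl and minikube both exit non-zero for an unknown context/profile, and that the suite's out/minikube-linux-arm64 binary is reachable from the working directory):

package main

import (
	"fmt"
	"os/exec"
)

// contextExists reports whether kubectl knows the named context.
// "kubectl config get-contexts <name>" exits non-zero for an unknown
// context, matching the `context "..." does not exist` failures above.
func contextExists(name string) bool {
	return exec.Command("kubectl", "config", "get-contexts", name).Run() == nil
}

// profileStarted reports whether a minikube profile is up. "minikube status
// -p <name>" exits non-zero when the profile was never created, matching
// the `* Profile "..." not found` messages above.
func profileStarted(name string) bool {
	return exec.Command("out/minikube-linux-arm64", "status", "-p", name).Run() == nil
}

func main() {
	const profile = "kubenet-355815"
	if !contextExists(profile) || !profileStarted(profile) {
		fmt.Printf("profile %q was never started; debug probes will all fail\n", profile)
	}
}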

                                                
                                    
TestNetworkPlugins/group/cilium (6.69s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-355815 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-355815

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-355815

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-355815

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-355815

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-355815

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-355815

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-355815

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-355815

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-355815

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-355815

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-355815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-355815"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-355815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-355815"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-355815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-355815"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: cilium-355815

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-355815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-355815"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-355815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-355815"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-355815" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-355815" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-355815" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-355815" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-355815" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-355815" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-355815" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-355815" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-355815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-355815"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-355815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-355815"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-355815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-355815"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-355815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-355815"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-355815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-355815"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-355815

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-355815

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-355815" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-355815" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-355815

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-355815

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-355815" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-355815" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-355815" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-355815" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-355815" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-355815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-355815"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-355815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-355815"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-355815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-355815"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-355815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-355815"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-355815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-355815"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-355815

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-355815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-355815"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-355815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-355815"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-355815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-355815"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-355815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-355815"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-355815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-355815"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-355815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-355815"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-355815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-355815"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-355815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-355815"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-355815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-355815"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-355815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-355815"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-355815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-355815"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-355815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-355815"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-355815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-355815"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-355815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-355815"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-355815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-355815"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-355815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-355815"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-355815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-355815"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-355815" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-355815"

                                                
                                                
----------------------- debugLogs end: cilium-355815 [took: 6.506739937s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-355815" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-355815
--- SKIP: TestNetworkPlugins/group/cilium (6.69s)
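For reference, the skip that produces this entire block originates at net_test.go:102 above. A minimal sketch of that pattern (the actual guard in minikube's net_test.go may be conditioned differently): t.Skip ends the subtest before "minikube start" ever runs, so the post-test debugLogs collector can only find a missing profile and a missing kubeconfig context.

package net_test

import "testing"

// Sketch only: t.Skip marks the (sub)test SKIP and returns immediately,
// so no cluster is created for the "cilium-355815" profile and every
// later probe reports "profile not found" or "context does not exist".
func TestCiliumSkipSketch(t *testing.T) {
	t.Skip("Skipping the test as it's interfering with other tests and is outdated")
}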

                                                
                                    