Test Report: Docker_Linux_containerd_arm64 18320

135c3c98ed62ac5bccf3530555abd368cdd0fde3:2024-03-07:33456

Failed tests (8/335)

TestAddons/parallel/Ingress (39.63s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-963512 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-963512 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-963512 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [57164078-4edf-4a1c-8210-492e6b8185ae] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [57164078-4edf-4a1c-8210-492e6b8185ae] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.029735156s
addons_test.go:262: (dbg) Run:  out/minikube-linux-arm64 -p addons-963512 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-963512 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-arm64 -p addons-963512 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:297: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.081703891s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
addons_test.go:299: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:303: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
addons_test.go:306: (dbg) Run:  out/minikube-linux-arm64 -p addons-963512 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-arm64 -p addons-963512 addons disable ingress-dns --alsologtostderr -v=1: (1.632748355s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-arm64 -p addons-963512 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-arm64 -p addons-963512 addons disable ingress --alsologtostderr -v=1: (7.778656929s)
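For reference, the failing step can be reproduced outside the harness with a minimal Go sketch. It assumes, as the test's "nslookup hello-john.test 192.168.49.2" command does, that the ingress-dns addon answers DNS on port 53 of the node IP; the name hello-john.test comes from testdata/ingress-dns-example-v1.yaml above.

	package main

	import (
		"context"
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Resolve against the ingress-dns addon directly, mirroring
		// `nslookup hello-john.test 192.168.49.2` from the test above.
		r := &net.Resolver{
			PreferGo: true,
			Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
				d := net.Dialer{Timeout: 5 * time.Second}
				return d.DialContext(ctx, network, "192.168.49.2:53")
			},
		}
		ctx, cancel := context.WithTimeout(context.Background(), 15*time.Second)
		defer cancel()
		addrs, err := r.LookupHost(ctx, "hello-john.test")
		if err != nil {
			// A timeout here matches the ";; connection timed out" output above.
			fmt.Println("lookup failed:", err)
			return
		}
		fmt.Println("resolved:", addrs)
	}
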
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-963512
helpers_test.go:235: (dbg) docker inspect addons-963512:

-- stdout --
	[
	    {
	        "Id": "268fd7ac0849d50063e697283d8842148a0c8d046089c26eb4fdb401bc544cf5",
	        "Created": "2024-03-07T21:47:54.802815369Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 9050,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-03-07T21:47:55.161570378Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:4a9b65157dd7fb2ddb7cb7afe975b3dc288e9877c60d13613a69dd41a70e2e4e",
	        "ResolvConfPath": "/var/lib/docker/containers/268fd7ac0849d50063e697283d8842148a0c8d046089c26eb4fdb401bc544cf5/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/268fd7ac0849d50063e697283d8842148a0c8d046089c26eb4fdb401bc544cf5/hostname",
	        "HostsPath": "/var/lib/docker/containers/268fd7ac0849d50063e697283d8842148a0c8d046089c26eb4fdb401bc544cf5/hosts",
	        "LogPath": "/var/lib/docker/containers/268fd7ac0849d50063e697283d8842148a0c8d046089c26eb4fdb401bc544cf5/268fd7ac0849d50063e697283d8842148a0c8d046089c26eb4fdb401bc544cf5-json.log",
	        "Name": "/addons-963512",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-963512:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-963512",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/874c8a531921a0cb274f3c9f29456a2df92e0721828de606d5cdced0be1ba832-init/diff:/var/lib/docker/overlay2/6822645c415ab3e3451f0dc6746bf9aea38c91b1070d7030c1ba88a1ef7f69e0/diff",
	                "MergedDir": "/var/lib/docker/overlay2/874c8a531921a0cb274f3c9f29456a2df92e0721828de606d5cdced0be1ba832/merged",
	                "UpperDir": "/var/lib/docker/overlay2/874c8a531921a0cb274f3c9f29456a2df92e0721828de606d5cdced0be1ba832/diff",
	                "WorkDir": "/var/lib/docker/overlay2/874c8a531921a0cb274f3c9f29456a2df92e0721828de606d5cdced0be1ba832/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-963512",
	                "Source": "/var/lib/docker/volumes/addons-963512/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-963512",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-963512",
	                "name.minikube.sigs.k8s.io": "addons-963512",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3dd1803eafd80632ff2dcf736a6697f6dc853c0dd6401c9973bcb636be1dc5fa",
	            "SandboxKey": "/var/run/docker/netns/3dd1803eafd8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-963512": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "268fd7ac0849",
	                        "addons-963512"
	                    ],
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "8843454e1a2e9e62c7cd83baebf63228759ac1040db402b7b3da8ab031c3b9fb",
	                    "EndpointID": "46970578f819585d5a2ff047cfd1a0b4b9da6fd433a9be6f983a9cce33fb7164",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "addons-963512",
	                        "268fd7ac0849"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
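A single field can be pulled from a dump like the one above with docker inspect's Go-template flag instead of reading the whole JSON; the cli_runner.go entries later in this log make the same style of call. A minimal sketch in Go, with the container name taken from this report:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Extract just the container IP with a Go template, the same kind of
		// `docker container inspect -f ...` invocation seen in this log.
		out, err := exec.Command("docker", "container", "inspect",
			"-f", "{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}",
			"addons-963512").Output()
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		// For this run the expected value is 192.168.49.2.
		fmt.Println("container IP:", strings.TrimSpace(string(out)))
	}
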
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-963512 -n addons-963512
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-963512 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-963512 logs -n 25: (1.417537284s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                | minikube               | jenkins | v1.32.0 | 07 Mar 24 21:47 UTC | 07 Mar 24 21:47 UTC |
	| delete  | -p download-only-526545              | download-only-526545   | jenkins | v1.32.0 | 07 Mar 24 21:47 UTC | 07 Mar 24 21:47 UTC |
	| start   | -o=json --download-only              | download-only-336944   | jenkins | v1.32.0 | 07 Mar 24 21:47 UTC |                     |
	|         | -p download-only-336944              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2    |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.32.0 | 07 Mar 24 21:47 UTC | 07 Mar 24 21:47 UTC |
	| delete  | -p download-only-336944              | download-only-336944   | jenkins | v1.32.0 | 07 Mar 24 21:47 UTC | 07 Mar 24 21:47 UTC |
	| delete  | -p download-only-150781              | download-only-150781   | jenkins | v1.32.0 | 07 Mar 24 21:47 UTC | 07 Mar 24 21:47 UTC |
	| delete  | -p download-only-526545              | download-only-526545   | jenkins | v1.32.0 | 07 Mar 24 21:47 UTC | 07 Mar 24 21:47 UTC |
	| delete  | -p download-only-336944              | download-only-336944   | jenkins | v1.32.0 | 07 Mar 24 21:47 UTC | 07 Mar 24 21:47 UTC |
	| start   | --download-only -p                   | download-docker-983577 | jenkins | v1.32.0 | 07 Mar 24 21:47 UTC |                     |
	|         | download-docker-983577               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p download-docker-983577            | download-docker-983577 | jenkins | v1.32.0 | 07 Mar 24 21:47 UTC | 07 Mar 24 21:47 UTC |
	| start   | --download-only -p                   | binary-mirror-880422   | jenkins | v1.32.0 | 07 Mar 24 21:47 UTC |                     |
	|         | binary-mirror-880422                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:39807               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-880422              | binary-mirror-880422   | jenkins | v1.32.0 | 07 Mar 24 21:47 UTC | 07 Mar 24 21:47 UTC |
	| addons  | enable dashboard -p                  | addons-963512          | jenkins | v1.32.0 | 07 Mar 24 21:47 UTC |                     |
	|         | addons-963512                        |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                 | addons-963512          | jenkins | v1.32.0 | 07 Mar 24 21:47 UTC |                     |
	|         | addons-963512                        |                        |         |         |                     |                     |
	| start   | -p addons-963512 --wait=true         | addons-963512          | jenkins | v1.32.0 | 07 Mar 24 21:47 UTC | 07 Mar 24 21:49 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --driver=docker        |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	| ip      | addons-963512 ip                     | addons-963512          | jenkins | v1.32.0 | 07 Mar 24 21:49 UTC | 07 Mar 24 21:49 UTC |
	| addons  | addons-963512 addons disable         | addons-963512          | jenkins | v1.32.0 | 07 Mar 24 21:49 UTC | 07 Mar 24 21:49 UTC |
	|         | registry --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | addons-963512 addons                 | addons-963512          | jenkins | v1.32.0 | 07 Mar 24 21:49 UTC | 07 Mar 24 21:49 UTC |
	|         | disable metrics-server               |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p          | addons-963512          | jenkins | v1.32.0 | 07 Mar 24 21:49 UTC | 07 Mar 24 21:50 UTC |
	|         | addons-963512                        |                        |         |         |                     |                     |
	| ssh     | addons-963512 ssh curl -s            | addons-963512          | jenkins | v1.32.0 | 07 Mar 24 21:50 UTC | 07 Mar 24 21:50 UTC |
	|         | http://127.0.0.1/ -H 'Host:          |                        |         |         |                     |                     |
	|         | nginx.example.com'                   |                        |         |         |                     |                     |
	| ip      | addons-963512 ip                     | addons-963512          | jenkins | v1.32.0 | 07 Mar 24 21:50 UTC | 07 Mar 24 21:50 UTC |
	| addons  | addons-963512 addons                 | addons-963512          | jenkins | v1.32.0 | 07 Mar 24 21:50 UTC | 07 Mar 24 21:50 UTC |
	|         | disable csi-hostpath-driver          |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-963512 addons disable         | addons-963512          | jenkins | v1.32.0 | 07 Mar 24 21:50 UTC | 07 Mar 24 21:50 UTC |
	|         | ingress-dns --alsologtostderr        |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | addons-963512 addons disable         | addons-963512          | jenkins | v1.32.0 | 07 Mar 24 21:50 UTC | 07 Mar 24 21:50 UTC |
	|         | ingress --alsologtostderr -v=1       |                        |         |         |                     |                     |
	| addons  | addons-963512 addons                 | addons-963512          | jenkins | v1.32.0 | 07 Mar 24 21:50 UTC |                     |
	|         | disable volumesnapshots              |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/07 21:47:30
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0307 21:47:30.856221    8582 out.go:291] Setting OutFile to fd 1 ...
	I0307 21:47:30.856384    8582 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 21:47:30.856395    8582 out.go:304] Setting ErrFile to fd 2...
	I0307 21:47:30.856401    8582 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 21:47:30.856646    8582 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18320-2408/.minikube/bin
	I0307 21:47:30.857070    8582 out.go:298] Setting JSON to false
	I0307 21:47:30.857781    8582 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":1794,"bootTime":1709846257,"procs":147,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0307 21:47:30.857842    8582 start.go:139] virtualization:  
	I0307 21:47:30.860880    8582 out.go:177] * [addons-963512] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0307 21:47:30.863608    8582 out.go:177]   - MINIKUBE_LOCATION=18320
	I0307 21:47:30.865641    8582 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 21:47:30.863795    8582 notify.go:220] Checking for updates...
	I0307 21:47:30.870296    8582 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18320-2408/kubeconfig
	I0307 21:47:30.872751    8582 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18320-2408/.minikube
	I0307 21:47:30.875270    8582 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0307 21:47:30.877572    8582 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 21:47:30.879830    8582 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 21:47:30.900582    8582 docker.go:122] docker version: linux-25.0.4:Docker Engine - Community
	I0307 21:47:30.900702    8582 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0307 21:47:30.972380    8582 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:49 SystemTime:2024-03-07 21:47:30.963315752 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0307 21:47:30.972486    8582 docker.go:295] overlay module found
	I0307 21:47:30.974746    8582 out.go:177] * Using the docker driver based on user configuration
	I0307 21:47:30.976500    8582 start.go:297] selected driver: docker
	I0307 21:47:30.976516    8582 start.go:901] validating driver "docker" against <nil>
	I0307 21:47:30.976529    8582 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 21:47:30.977153    8582 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0307 21:47:31.032311    8582 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:49 SystemTime:2024-03-07 21:47:31.024064167 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0307 21:47:31.032470    8582 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0307 21:47:31.032695    8582 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0307 21:47:31.035153    8582 out.go:177] * Using Docker driver with root privileges
	I0307 21:47:31.037434    8582 cni.go:84] Creating CNI manager for ""
	I0307 21:47:31.037461    8582 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0307 21:47:31.037474    8582 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0307 21:47:31.037555    8582 start.go:340] cluster config:
	{Name:addons-963512 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-963512 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 21:47:31.039910    8582 out.go:177] * Starting "addons-963512" primary control-plane node in "addons-963512" cluster
	I0307 21:47:31.041912    8582 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0307 21:47:31.043839    8582 out.go:177] * Pulling base image v0.0.42-1708944392-18244 ...
	I0307 21:47:31.046058    8582 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0307 21:47:31.046103    8582 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18320-2408/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4
	I0307 21:47:31.046117    8582 cache.go:56] Caching tarball of preloaded images
	I0307 21:47:31.046149    8582 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local docker daemon
	I0307 21:47:31.046204    8582 preload.go:173] Found /home/jenkins/minikube-integration/18320-2408/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0307 21:47:31.046214    8582 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on containerd
	I0307 21:47:31.046581    8582 profile.go:142] Saving config to /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/addons-963512/config.json ...
	I0307 21:47:31.046612    8582 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/addons-963512/config.json: {Name:mkfaff1358e3290a9e5529ff48ed6fe910f98aae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 21:47:31.060479    8582 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 to local cache
	I0307 21:47:31.060590    8582 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local cache directory
	I0307 21:47:31.060618    8582 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local cache directory, skipping pull
	I0307 21:47:31.060627    8582 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 exists in cache, skipping pull
	I0307 21:47:31.060635    8582 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 as a tarball
	I0307 21:47:31.060648    8582 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 from local cache
	I0307 21:47:46.769630    8582 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 from cached tarball
	I0307 21:47:46.769662    8582 cache.go:194] Successfully downloaded all kic artifacts
	I0307 21:47:46.769690    8582 start.go:360] acquireMachinesLock for addons-963512: {Name:mkc22c72bf972f547a77fb9031585d63b88d0bcc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 21:47:46.769804    8582 start.go:364] duration metric: took 86.925µs to acquireMachinesLock for "addons-963512"
	I0307 21:47:46.769828    8582 start.go:93] Provisioning new machine with config: &{Name:addons-963512 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-963512 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0307 21:47:46.769900    8582 start.go:125] createHost starting for "" (driver="docker")
	I0307 21:47:46.772846    8582 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0307 21:47:46.773117    8582 start.go:159] libmachine.API.Create for "addons-963512" (driver="docker")
	I0307 21:47:46.773155    8582 client.go:168] LocalClient.Create starting
	I0307 21:47:46.773288    8582 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18320-2408/.minikube/certs/ca.pem
	I0307 21:47:46.986226    8582 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18320-2408/.minikube/certs/cert.pem
	I0307 21:47:48.041100    8582 cli_runner.go:164] Run: docker network inspect addons-963512 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0307 21:47:48.056457    8582 cli_runner.go:211] docker network inspect addons-963512 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0307 21:47:48.056546    8582 network_create.go:281] running [docker network inspect addons-963512] to gather additional debugging logs...
	I0307 21:47:48.056570    8582 cli_runner.go:164] Run: docker network inspect addons-963512
	W0307 21:47:48.072845    8582 cli_runner.go:211] docker network inspect addons-963512 returned with exit code 1
	I0307 21:47:48.072880    8582 network_create.go:284] error running [docker network inspect addons-963512]: docker network inspect addons-963512: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-963512 not found
	I0307 21:47:48.072912    8582 network_create.go:286] output of [docker network inspect addons-963512]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-963512 not found
	
	** /stderr **
	I0307 21:47:48.073022    8582 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0307 21:47:48.090405    8582 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40004bbfb0}
	I0307 21:47:48.090453    8582 network_create.go:124] attempt to create docker network addons-963512 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0307 21:47:48.090518    8582 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-963512 addons-963512
	I0307 21:47:48.153355    8582 network_create.go:108] docker network addons-963512 192.168.49.0/24 created
	I0307 21:47:48.153387    8582 kic.go:121] calculated static IP "192.168.49.2" for the "addons-963512" container
	I0307 21:47:48.153458    8582 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0307 21:47:48.167103    8582 cli_runner.go:164] Run: docker volume create addons-963512 --label name.minikube.sigs.k8s.io=addons-963512 --label created_by.minikube.sigs.k8s.io=true
	I0307 21:47:48.183033    8582 oci.go:103] Successfully created a docker volume addons-963512
	I0307 21:47:48.183124    8582 cli_runner.go:164] Run: docker run --rm --name addons-963512-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-963512 --entrypoint /usr/bin/test -v addons-963512:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 -d /var/lib
	I0307 21:47:50.448514    8582 cli_runner.go:217] Completed: docker run --rm --name addons-963512-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-963512 --entrypoint /usr/bin/test -v addons-963512:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 -d /var/lib: (2.265346309s)
	I0307 21:47:50.448544    8582 oci.go:107] Successfully prepared a docker volume addons-963512
	I0307 21:47:50.448568    8582 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0307 21:47:50.448587    8582 kic.go:194] Starting extracting preloaded images to volume ...
	I0307 21:47:50.448678    8582 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18320-2408/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-963512:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 -I lz4 -xf /preloaded.tar -C /extractDir
	I0307 21:47:54.732239    8582 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18320-2408/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-963512:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 -I lz4 -xf /preloaded.tar -C /extractDir: (4.283510113s)
	I0307 21:47:54.732297    8582 kic.go:203] duration metric: took 4.283706092s to extract preloaded images to volume ...
	W0307 21:47:54.732452    8582 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0307 21:47:54.732606    8582 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0307 21:47:54.786769    8582 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-963512 --name addons-963512 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-963512 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-963512 --network addons-963512 --ip 192.168.49.2 --volume addons-963512:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08
	I0307 21:47:55.171086    8582 cli_runner.go:164] Run: docker container inspect addons-963512 --format={{.State.Running}}
	I0307 21:47:55.192903    8582 cli_runner.go:164] Run: docker container inspect addons-963512 --format={{.State.Status}}
	I0307 21:47:55.220068    8582 cli_runner.go:164] Run: docker exec addons-963512 stat /var/lib/dpkg/alternatives/iptables
	I0307 21:47:55.288088    8582 oci.go:144] the created container "addons-963512" has a running status.
	I0307 21:47:55.288170    8582 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18320-2408/.minikube/machines/addons-963512/id_rsa...
	I0307 21:47:56.086307    8582 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18320-2408/.minikube/machines/addons-963512/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0307 21:47:56.116585    8582 cli_runner.go:164] Run: docker container inspect addons-963512 --format={{.State.Status}}
	I0307 21:47:56.141605    8582 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0307 21:47:56.141624    8582 kic_runner.go:114] Args: [docker exec --privileged addons-963512 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0307 21:47:56.213157    8582 cli_runner.go:164] Run: docker container inspect addons-963512 --format={{.State.Status}}
	I0307 21:47:56.231609    8582 machine.go:94] provisionDockerMachine start ...
	I0307 21:47:56.231697    8582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-963512
	I0307 21:47:56.249799    8582 main.go:141] libmachine: Using SSH client type: native
	I0307 21:47:56.250177    8582 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1c40] 0x3e44a0 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I0307 21:47:56.250190    8582 main.go:141] libmachine: About to run SSH command:
	hostname
	I0307 21:47:56.383743    8582 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-963512
	
	I0307 21:47:56.383767    8582 ubuntu.go:169] provisioning hostname "addons-963512"
	I0307 21:47:56.383828    8582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-963512
	I0307 21:47:56.402522    8582 main.go:141] libmachine: Using SSH client type: native
	I0307 21:47:56.402776    8582 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1c40] 0x3e44a0 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I0307 21:47:56.402793    8582 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-963512 && echo "addons-963512" | sudo tee /etc/hostname
	I0307 21:47:56.544825    8582 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-963512
	
	I0307 21:47:56.544922    8582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-963512
	I0307 21:47:56.562226    8582 main.go:141] libmachine: Using SSH client type: native
	I0307 21:47:56.562473    8582 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1c40] 0x3e44a0 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I0307 21:47:56.562488    8582 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-963512' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-963512/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-963512' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0307 21:47:56.693258    8582 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0307 21:47:56.693280    8582 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18320-2408/.minikube CaCertPath:/home/jenkins/minikube-integration/18320-2408/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18320-2408/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18320-2408/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18320-2408/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18320-2408/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18320-2408/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18320-2408/.minikube}
	I0307 21:47:56.693311    8582 ubuntu.go:177] setting up certificates
	I0307 21:47:56.693321    8582 provision.go:84] configureAuth start
	I0307 21:47:56.693406    8582 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-963512
	I0307 21:47:56.713041    8582 provision.go:143] copyHostCerts
	I0307 21:47:56.713125    8582 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18320-2408/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18320-2408/.minikube/ca.pem (1078 bytes)
	I0307 21:47:56.713275    8582 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18320-2408/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18320-2408/.minikube/cert.pem (1123 bytes)
	I0307 21:47:56.713365    8582 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18320-2408/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18320-2408/.minikube/key.pem (1675 bytes)
	I0307 21:47:56.713427    8582 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18320-2408/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18320-2408/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18320-2408/.minikube/certs/ca-key.pem org=jenkins.addons-963512 san=[127.0.0.1 192.168.49.2 addons-963512 localhost minikube]
	I0307 21:47:57.383426    8582 provision.go:177] copyRemoteCerts
	I0307 21:47:57.383493    8582 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0307 21:47:57.383537    8582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-963512
	I0307 21:47:57.399023    8582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18320-2408/.minikube/machines/addons-963512/id_rsa Username:docker}
	I0307 21:47:57.492861    8582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18320-2408/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0307 21:47:57.517269    8582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18320-2408/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0307 21:47:57.542230    8582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18320-2408/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0307 21:47:57.565875    8582 provision.go:87] duration metric: took 872.541484ms to configureAuth
	I0307 21:47:57.565944    8582 ubuntu.go:193] setting minikube options for container-runtime
	I0307 21:47:57.566165    8582 config.go:182] Loaded profile config "addons-963512": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0307 21:47:57.566179    8582 machine.go:97] duration metric: took 1.334551415s to provisionDockerMachine
	I0307 21:47:57.566192    8582 client.go:171] duration metric: took 10.793022211s to LocalClient.Create
	I0307 21:47:57.566210    8582 start.go:167] duration metric: took 10.793094179s to libmachine.API.Create "addons-963512"
	I0307 21:47:57.566224    8582 start.go:293] postStartSetup for "addons-963512" (driver="docker")
	I0307 21:47:57.566234    8582 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0307 21:47:57.566288    8582 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0307 21:47:57.566337    8582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-963512
	I0307 21:47:57.581586    8582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18320-2408/.minikube/machines/addons-963512/id_rsa Username:docker}
	I0307 21:47:57.673193    8582 ssh_runner.go:195] Run: cat /etc/os-release
	I0307 21:47:57.676294    8582 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0307 21:47:57.676407    8582 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0307 21:47:57.676428    8582 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0307 21:47:57.676436    8582 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0307 21:47:57.676445    8582 filesync.go:126] Scanning /home/jenkins/minikube-integration/18320-2408/.minikube/addons for local assets ...
	I0307 21:47:57.676515    8582 filesync.go:126] Scanning /home/jenkins/minikube-integration/18320-2408/.minikube/files for local assets ...
	I0307 21:47:57.676542    8582 start.go:296] duration metric: took 110.311242ms for postStartSetup
	I0307 21:47:57.676854    8582 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-963512
	I0307 21:47:57.693212    8582 profile.go:142] Saving config to /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/addons-963512/config.json ...
	I0307 21:47:57.693487    8582 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0307 21:47:57.693537    8582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-963512
	I0307 21:47:57.711632    8582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18320-2408/.minikube/machines/addons-963512/id_rsa Username:docker}
	I0307 21:47:57.801217    8582 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
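The two df probes above are the free-space check for /var: the first reads the used percentage, the second the available space in whole gigabytes. Run by hand they look like this (sample values illustrative):

	df -h /var | awk 'NR==2{print $5}'    # row 2, column 5 is Use%  -> e.g. 12%
	df -BG /var | awk 'NR==2{print $4}'   # row 2, column 4 is Avail -> e.g. 180G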
	I0307 21:47:57.805778    8582 start.go:128] duration metric: took 11.035863034s to createHost
	I0307 21:47:57.805803    8582 start.go:83] releasing machines lock for "addons-963512", held for 11.035990772s
	I0307 21:47:57.805902    8582 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-963512
	I0307 21:47:57.822355    8582 ssh_runner.go:195] Run: cat /version.json
	I0307 21:47:57.822401    8582 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0307 21:47:57.822555    8582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-963512
	I0307 21:47:57.822407    8582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-963512
	I0307 21:47:57.848947    8582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18320-2408/.minikube/machines/addons-963512/id_rsa Username:docker}
	I0307 21:47:57.852026    8582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18320-2408/.minikube/machines/addons-963512/id_rsa Username:docker}
	I0307 21:47:58.063399    8582 ssh_runner.go:195] Run: systemctl --version
	I0307 21:47:58.067742    8582 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0307 21:47:58.071923    8582 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0307 21:47:58.096697    8582 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0307 21:47:58.096771    8582 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0307 21:47:58.126428    8582 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
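After the loopback patch above, each matching conf gains a "name" field and a pinned cniVersion, while the bridge/podman confs are renamed out of the way. A minimal sketch of a patched loopback file (the real file name is elided by the glob):

	{
	  "cniVersion": "1.0.0",
	  "name": "loopback",
	  "type": "loopback"
	}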
	I0307 21:47:58.126494    8582 start.go:494] detecting cgroup driver to use...
	I0307 21:47:58.126551    8582 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0307 21:47:58.126629    8582 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0307 21:47:58.139503    8582 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0307 21:47:58.150850    8582 docker.go:217] disabling cri-docker service (if available) ...
	I0307 21:47:58.150941    8582 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0307 21:47:58.164918    8582 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0307 21:47:58.179810    8582 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0307 21:47:58.259103    8582 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0307 21:47:58.352417    8582 docker.go:233] disabling docker service ...
	I0307 21:47:58.352499    8582 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0307 21:47:58.372393    8582 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0307 21:47:58.384514    8582 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0307 21:47:58.464789    8582 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0307 21:47:58.552249    8582 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0307 21:47:58.563803    8582 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0307 21:47:58.581715    8582 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0307 21:47:58.591987    8582 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0307 21:47:58.601874    8582 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0307 21:47:58.602000    8582 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0307 21:47:58.611674    8582 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0307 21:47:58.621410    8582 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0307 21:47:58.630875    8582 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0307 21:47:58.640641    8582 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0307 21:47:58.649953    8582 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0307 21:47:58.659390    8582 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0307 21:47:58.667982    8582 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0307 21:47:58.676095    8582 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 21:47:58.761491    8582 ssh_runner.go:195] Run: sudo systemctl restart containerd
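Taken together, the sed edits above shape /etc/containerd/config.toml before the restart. A rough excerpt of the affected stanzas (illustrative, not the full file):

	[plugins."io.containerd.grpc.v1.cri"]
	  sandbox_image = "registry.k8s.io/pause:3.9"
	  restrict_oom_score_adj = false
	  [plugins."io.containerd.grpc.v1.cri".cni]
	    conf_dir = "/etc/cni/net.d"
	  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
	    runtime_type = "io.containerd.runc.v2"
	    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	      SystemdCgroup = false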
	I0307 21:47:58.892296    8582 start.go:541] Will wait 60s for socket path /run/containerd/containerd.sock
	I0307 21:47:58.892375    8582 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0307 21:47:58.895807    8582 start.go:562] Will wait 60s for crictl version
	I0307 21:47:58.895876    8582 ssh_runner.go:195] Run: which crictl
	I0307 21:47:58.899177    8582 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0307 21:47:58.945787    8582 start.go:578] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.28
	RuntimeApiVersion:  v1
	I0307 21:47:58.945893    8582 ssh_runner.go:195] Run: containerd --version
	I0307 21:47:58.966474    8582 ssh_runner.go:195] Run: containerd --version
	I0307 21:47:58.991151    8582 out.go:177] * Preparing Kubernetes v1.28.4 on containerd 1.6.28 ...
	I0307 21:47:58.993564    8582 cli_runner.go:164] Run: docker network inspect addons-963512 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0307 21:47:59.009426    8582 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0307 21:47:59.013109    8582 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0307 21:47:59.023712    8582 kubeadm.go:877] updating cluster {Name:addons-963512 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-963512 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0307 21:47:59.023842    8582 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0307 21:47:59.023911    8582 ssh_runner.go:195] Run: sudo crictl images --output json
	I0307 21:47:59.066001    8582 containerd.go:612] all images are preloaded for containerd runtime.
	I0307 21:47:59.066025    8582 containerd.go:519] Images already preloaded, skipping extraction
	I0307 21:47:59.066118    8582 ssh_runner.go:195] Run: sudo crictl images --output json
	I0307 21:47:59.102076    8582 containerd.go:612] all images are preloaded for containerd runtime.
	I0307 21:47:59.102098    8582 cache_images.go:84] Images are preloaded, skipping loading
	I0307 21:47:59.102107    8582 kubeadm.go:928] updating node { 192.168.49.2 8443 v1.28.4 containerd true true} ...
	I0307 21:47:59.102214    8582 kubeadm.go:940] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-963512 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-963512 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0307 21:47:59.102318    8582 ssh_runner.go:195] Run: sudo crictl info
	I0307 21:47:59.138900    8582 cni.go:84] Creating CNI manager for ""
	I0307 21:47:59.138921    8582 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0307 21:47:59.138931    8582 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0307 21:47:59.138953    8582 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-963512 NodeName:addons-963512 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0307 21:47:59.139092    8582 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-963512"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
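Once this config lands at /var/tmp/minikube/kubeadm.yaml (copied a few steps further down), it can be sanity-checked before init. A sketch using stock kubeadm subcommands; "config validate" exists in recent kubeadm releases, and --dry-run exercises the init path without persisting anything:

	sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run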
	
	I0307 21:47:59.139179    8582 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0307 21:47:59.148326    8582 binaries.go:44] Found k8s binaries, skipping transfer
	I0307 21:47:59.148439    8582 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0307 21:47:59.157065    8582 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0307 21:47:59.174717    8582 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0307 21:47:59.194886    8582 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2167 bytes)
	I0307 21:47:59.213118    8582 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0307 21:47:59.216335    8582 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0307 21:47:59.226878    8582 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 21:47:59.303895    8582 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0307 21:47:59.319283    8582 certs.go:68] Setting up /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/addons-963512 for IP: 192.168.49.2
	I0307 21:47:59.319346    8582 certs.go:194] generating shared ca certs ...
	I0307 21:47:59.319376    8582 certs.go:226] acquiring lock for ca certs: {Name:mk7f303c61c8508a802bee4114a394243ccd109f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 21:47:59.319550    8582 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/18320-2408/.minikube/ca.key
	I0307 21:47:59.607025    8582 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18320-2408/.minikube/ca.crt ...
	I0307 21:47:59.607059    8582 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18320-2408/.minikube/ca.crt: {Name:mkfbbe6943bc19d717b500158cbceb169ba4756a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 21:47:59.607252    8582 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18320-2408/.minikube/ca.key ...
	I0307 21:47:59.607265    8582 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18320-2408/.minikube/ca.key: {Name:mkbb3672b232f77a77f948f9cc6992fc9b82b64b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 21:47:59.607352    8582 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18320-2408/.minikube/proxy-client-ca.key
	I0307 21:47:59.947161    8582 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18320-2408/.minikube/proxy-client-ca.crt ...
	I0307 21:47:59.947213    8582 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18320-2408/.minikube/proxy-client-ca.crt: {Name:mk9aa750b39dfb459a036c45219993e3675189b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 21:47:59.947448    8582 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18320-2408/.minikube/proxy-client-ca.key ...
	I0307 21:47:59.947465    8582 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18320-2408/.minikube/proxy-client-ca.key: {Name:mk7da6cab51d06694701335e0cd5746ee1c580c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 21:47:59.947581    8582 certs.go:256] generating profile certs ...
	I0307 21:47:59.947654    8582 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/addons-963512/client.key
	I0307 21:47:59.947671    8582 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/addons-963512/client.crt with IP's: []
	I0307 21:48:00.428445    8582 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/addons-963512/client.crt ...
	I0307 21:48:00.428532    8582 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/addons-963512/client.crt: {Name:mk7395f79e624ececb255077e9cbeba412d3c048 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 21:48:00.428801    8582 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/addons-963512/client.key ...
	I0307 21:48:00.428842    8582 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/addons-963512/client.key: {Name:mk1d5c95cb3c70b545bc143fd89b2c9b6bd00cc5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 21:48:00.428987    8582 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/addons-963512/apiserver.key.00f2f529
	I0307 21:48:00.429036    8582 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/addons-963512/apiserver.crt.00f2f529 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0307 21:48:01.189312    8582 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/addons-963512/apiserver.crt.00f2f529 ...
	I0307 21:48:01.189350    8582 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/addons-963512/apiserver.crt.00f2f529: {Name:mk0b95b44ed83c74e8cc54fd259198587f90b661 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 21:48:01.189561    8582 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/addons-963512/apiserver.key.00f2f529 ...
	I0307 21:48:01.189577    8582 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/addons-963512/apiserver.key.00f2f529: {Name:mk68d2c56784a39a9401fa891f59e1e221f2a63a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 21:48:01.189675    8582 certs.go:381] copying /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/addons-963512/apiserver.crt.00f2f529 -> /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/addons-963512/apiserver.crt
	I0307 21:48:01.189771    8582 certs.go:385] copying /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/addons-963512/apiserver.key.00f2f529 -> /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/addons-963512/apiserver.key
	I0307 21:48:01.189830    8582 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/addons-963512/proxy-client.key
	I0307 21:48:01.189854    8582 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/addons-963512/proxy-client.crt with IP's: []
	I0307 21:48:01.724770    8582 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/addons-963512/proxy-client.crt ...
	I0307 21:48:01.724801    8582 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/addons-963512/proxy-client.crt: {Name:mke9397b127f14849a7627474f57d44162a7e32d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 21:48:01.724976    8582 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/addons-963512/proxy-client.key ...
	I0307 21:48:01.724990    8582 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/addons-963512/proxy-client.key: {Name:mk78eda69ae2c05925b3f3fed3a9ccf0d9de591c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 21:48:01.725182    8582 certs.go:484] found cert: /home/jenkins/minikube-integration/18320-2408/.minikube/certs/ca-key.pem (1679 bytes)
	I0307 21:48:01.725223    8582 certs.go:484] found cert: /home/jenkins/minikube-integration/18320-2408/.minikube/certs/ca.pem (1078 bytes)
	I0307 21:48:01.725251    8582 certs.go:484] found cert: /home/jenkins/minikube-integration/18320-2408/.minikube/certs/cert.pem (1123 bytes)
	I0307 21:48:01.725286    8582 certs.go:484] found cert: /home/jenkins/minikube-integration/18320-2408/.minikube/certs/key.pem (1675 bytes)
	I0307 21:48:01.725857    8582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18320-2408/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0307 21:48:01.752052    8582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18320-2408/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0307 21:48:01.775244    8582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18320-2408/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0307 21:48:01.800248    8582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18320-2408/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0307 21:48:01.823983    8582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/addons-963512/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0307 21:48:01.848750    8582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/addons-963512/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0307 21:48:01.872115    8582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/addons-963512/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0307 21:48:01.895693    8582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/addons-963512/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0307 21:48:01.919389    8582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18320-2408/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0307 21:48:01.946379    8582 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0307 21:48:01.965504    8582 ssh_runner.go:195] Run: openssl version
	I0307 21:48:01.971285    8582 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0307 21:48:01.981327    8582 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0307 21:48:01.985233    8582 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar  7 21:47 /usr/share/ca-certificates/minikubeCA.pem
	I0307 21:48:01.985339    8582 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0307 21:48:01.992664    8582 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
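The b5213941.0 link name above is not arbitrary: it is the OpenSSL subject hash of minikubeCA, the <hash>.0 form that TLS libraries use to look up CAs in /etc/ssl/certs:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0       # <hash>.0 lookup name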
	I0307 21:48:02.003677    8582 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0307 21:48:02.009691    8582 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0307 21:48:02.009781    8582 kubeadm.go:391] StartCluster: {Name:addons-963512 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-963512 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 21:48:02.009864    8582 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0307 21:48:02.009927    8582 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0307 21:48:02.053741    8582 cri.go:89] found id: ""
	I0307 21:48:02.053812    8582 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0307 21:48:02.062557    8582 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0307 21:48:02.071478    8582 kubeadm.go:213] ignoring SystemVerification for kubeadm because of docker driver
	I0307 21:48:02.071563    8582 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0307 21:48:02.080328    8582 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0307 21:48:02.080347    8582 kubeadm.go:156] found existing configuration files:
	
	I0307 21:48:02.080419    8582 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0307 21:48:02.090199    8582 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0307 21:48:02.090271    8582 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0307 21:48:02.098751    8582 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0307 21:48:02.107397    8582 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0307 21:48:02.107507    8582 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0307 21:48:02.115698    8582 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0307 21:48:02.124355    8582 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0307 21:48:02.124435    8582 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0307 21:48:02.133228    8582 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0307 21:48:02.141847    8582 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0307 21:48:02.141907    8582 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0307 21:48:02.150293    8582 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0307 21:48:02.197355    8582 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0307 21:48:02.197437    8582 kubeadm.go:309] [preflight] Running pre-flight checks
	I0307 21:48:02.238490    8582 kubeadm.go:309] [preflight] The system verification failed. Printing the output from the verification:
	I0307 21:48:02.238580    8582 kubeadm.go:309] KERNEL_VERSION: 5.15.0-1055-aws
	I0307 21:48:02.238634    8582 kubeadm.go:309] OS: Linux
	I0307 21:48:02.238693    8582 kubeadm.go:309] CGROUPS_CPU: enabled
	I0307 21:48:02.238758    8582 kubeadm.go:309] CGROUPS_CPUACCT: enabled
	I0307 21:48:02.238819    8582 kubeadm.go:309] CGROUPS_CPUSET: enabled
	I0307 21:48:02.238875    8582 kubeadm.go:309] CGROUPS_DEVICES: enabled
	I0307 21:48:02.238936    8582 kubeadm.go:309] CGROUPS_FREEZER: enabled
	I0307 21:48:02.238997    8582 kubeadm.go:309] CGROUPS_MEMORY: enabled
	I0307 21:48:02.239054    8582 kubeadm.go:309] CGROUPS_PIDS: enabled
	I0307 21:48:02.239114    8582 kubeadm.go:309] CGROUPS_HUGETLB: enabled
	I0307 21:48:02.239172    8582 kubeadm.go:309] CGROUPS_BLKIO: enabled
	I0307 21:48:02.317636    8582 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0307 21:48:02.317839    8582 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0307 21:48:02.317953    8582 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0307 21:48:02.532668    8582 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0307 21:48:02.535366    8582 out.go:204]   - Generating certificates and keys ...
	I0307 21:48:02.535537    8582 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0307 21:48:02.535621    8582 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0307 21:48:03.119922    8582 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0307 21:48:03.901933    8582 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0307 21:48:04.516506    8582 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0307 21:48:04.623527    8582 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0307 21:48:05.178652    8582 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0307 21:48:05.178871    8582 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-963512 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0307 21:48:05.577145    8582 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0307 21:48:05.577502    8582 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-963512 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0307 21:48:05.882745    8582 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0307 21:48:06.079949    8582 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0307 21:48:06.373079    8582 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0307 21:48:06.373503    8582 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0307 21:48:06.497702    8582 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0307 21:48:06.889410    8582 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0307 21:48:07.498623    8582 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0307 21:48:07.968484    8582 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0307 21:48:07.969574    8582 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0307 21:48:07.972670    8582 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0307 21:48:07.975281    8582 out.go:204]   - Booting up control plane ...
	I0307 21:48:07.975406    8582 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0307 21:48:07.976830    8582 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0307 21:48:07.978330    8582 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0307 21:48:07.993081    8582 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0307 21:48:07.993690    8582 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0307 21:48:07.993765    8582 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0307 21:48:08.090991    8582 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0307 21:48:15.093676    8582 kubeadm.go:309] [apiclient] All control plane components are healthy after 7.002736 seconds
	I0307 21:48:15.093796    8582 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0307 21:48:15.109385    8582 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0307 21:48:15.637033    8582 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0307 21:48:15.637227    8582 kubeadm.go:309] [mark-control-plane] Marking the node addons-963512 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0307 21:48:16.148820    8582 kubeadm.go:309] [bootstrap-token] Using token: eitw6c.jj5bymtx8a9epad7
	I0307 21:48:16.151351    8582 out.go:204]   - Configuring RBAC rules ...
	I0307 21:48:16.151485    8582 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0307 21:48:16.160130    8582 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0307 21:48:16.170199    8582 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0307 21:48:16.173972    8582 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0307 21:48:16.178568    8582 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0307 21:48:16.182025    8582 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0307 21:48:16.196298    8582 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0307 21:48:16.421411    8582 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0307 21:48:16.565628    8582 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0307 21:48:16.566647    8582 kubeadm.go:309] 
	I0307 21:48:16.566716    8582 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0307 21:48:16.566722    8582 kubeadm.go:309] 
	I0307 21:48:16.566796    8582 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0307 21:48:16.566801    8582 kubeadm.go:309] 
	I0307 21:48:16.566825    8582 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0307 21:48:16.566881    8582 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0307 21:48:16.566930    8582 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0307 21:48:16.566934    8582 kubeadm.go:309] 
	I0307 21:48:16.566985    8582 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0307 21:48:16.566993    8582 kubeadm.go:309] 
	I0307 21:48:16.567038    8582 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0307 21:48:16.567042    8582 kubeadm.go:309] 
	I0307 21:48:16.567092    8582 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0307 21:48:16.567163    8582 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0307 21:48:16.567232    8582 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0307 21:48:16.567236    8582 kubeadm.go:309] 
	I0307 21:48:16.567316    8582 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0307 21:48:16.567389    8582 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0307 21:48:16.567394    8582 kubeadm.go:309] 
	I0307 21:48:16.567473    8582 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token eitw6c.jj5bymtx8a9epad7 \
	I0307 21:48:16.567572    8582 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:815daef26ee193d5c7b84bb14049831ce64ba1c53ef7a1083d48ead9a06b7cce \
	I0307 21:48:16.567592    8582 kubeadm.go:309] 	--control-plane 
	I0307 21:48:16.567596    8582 kubeadm.go:309] 
	I0307 21:48:16.567677    8582 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0307 21:48:16.567681    8582 kubeadm.go:309] 
	I0307 21:48:16.567759    8582 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token eitw6c.jj5bymtx8a9epad7 \
	I0307 21:48:16.568076    8582 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:815daef26ee193d5c7b84bb14049831ce64ba1c53ef7a1083d48ead9a06b7cce 
	I0307 21:48:16.571520    8582 kubeadm.go:309] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1055-aws\n", err: exit status 1
	I0307 21:48:16.571632    8582 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
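The --discovery-token-ca-cert-hash in the join commands above is the SHA-256 of the cluster CA's public key. With the certificatesDir from the kubeadm config earlier, it can be recomputed on the node using the standard upstream recipe:

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'
	# or mint a fresh token together with the full join line:
	sudo kubeadm token create --print-join-command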
	I0307 21:48:16.571649    8582 cni.go:84] Creating CNI manager for ""
	I0307 21:48:16.571656    8582 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0307 21:48:16.574785    8582 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0307 21:48:16.576842    8582 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0307 21:48:16.581232    8582 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0307 21:48:16.581253    8582 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0307 21:48:16.605467    8582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0307 21:48:17.548047    8582 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0307 21:48:17.548179    8582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 21:48:17.548297    8582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-963512 minikube.k8s.io/updated_at=2024_03_07T21_48_17_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=3e3656b8cff33aafa60dd2a07a4b34bce666a6a6 minikube.k8s.io/name=addons-963512 minikube.k8s.io/primary=true
	I0307 21:48:17.759664    8582 ops.go:34] apiserver oom_adj: -16
	I0307 21:48:17.759787    8582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 21:48:18.259967    8582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 21:48:18.759924    8582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 21:48:19.260417    8582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 21:48:19.760673    8582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 21:48:20.260751    8582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 21:48:20.760445    8582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 21:48:21.260832    8582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 21:48:21.760608    8582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 21:48:22.260777    8582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 21:48:22.760001    8582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 21:48:23.260400    8582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 21:48:23.759941    8582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 21:48:24.260847    8582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 21:48:24.760767    8582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 21:48:25.259910    8582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 21:48:25.760598    8582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 21:48:26.260736    8582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 21:48:26.760744    8582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 21:48:27.259909    8582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 21:48:27.760625    8582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 21:48:28.260497    8582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 21:48:28.759898    8582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 21:48:29.259983    8582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 21:48:29.760642    8582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 21:48:30.260724    8582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 21:48:30.389998    8582 kubeadm.go:1106] duration metric: took 12.841869041s to wait for elevateKubeSystemPrivileges
	W0307 21:48:30.390035    8582 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0307 21:48:30.390043    8582 kubeadm.go:393] duration metric: took 28.380266479s to StartCluster
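The burst of identical "get sa default" runs above is minikube polling, at roughly the 500ms cadence visible in the timestamps, for the default service account to exist before granting kube-system privileges. The same wait in plain shell would be:

	until sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default \
	    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5   # matches the ~500ms spacing of the log lines above
	done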
	I0307 21:48:30.390058    8582 settings.go:142] acquiring lock: {Name:mk6b824c86d3c8cffe443e44d2dcdf6ba75674f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 21:48:30.390172    8582 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18320-2408/kubeconfig
	I0307 21:48:30.390611    8582 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18320-2408/kubeconfig: {Name:mkc7f9d8cfd4e14e150b8fc8a3019ac099191c36 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 21:48:30.390816    8582 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0307 21:48:30.393517    8582 out.go:177] * Verifying Kubernetes components...
	I0307 21:48:30.390953    8582 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0307 21:48:30.391133    8582 config.go:182] Loaded profile config "addons-963512": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0307 21:48:30.391143    8582 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0307 21:48:30.395637    8582 addons.go:69] Setting ingress=true in profile "addons-963512"
	I0307 21:48:30.395649    8582 addons.go:69] Setting ingress-dns=true in profile "addons-963512"
	I0307 21:48:30.395665    8582 addons.go:234] Setting addon ingress-dns=true in "addons-963512"
	I0307 21:48:30.395668    8582 addons.go:234] Setting addon ingress=true in "addons-963512"
	I0307 21:48:30.395701    8582 host.go:66] Checking if "addons-963512" exists ...
	I0307 21:48:30.395713    8582 host.go:66] Checking if "addons-963512" exists ...
	I0307 21:48:30.396172    8582 cli_runner.go:164] Run: docker container inspect addons-963512 --format={{.State.Status}}
	I0307 21:48:30.396199    8582 cli_runner.go:164] Run: docker container inspect addons-963512 --format={{.State.Status}}
	I0307 21:48:30.396810    8582 addons.go:69] Setting inspektor-gadget=true in profile "addons-963512"
	I0307 21:48:30.396836    8582 addons.go:234] Setting addon inspektor-gadget=true in "addons-963512"
	I0307 21:48:30.396873    8582 host.go:66] Checking if "addons-963512" exists ...
	I0307 21:48:30.397268    8582 cli_runner.go:164] Run: docker container inspect addons-963512 --format={{.State.Status}}
	I0307 21:48:30.397444    8582 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 21:48:30.397678    8582 addons.go:69] Setting cloud-spanner=true in profile "addons-963512"
	I0307 21:48:30.397700    8582 addons.go:234] Setting addon cloud-spanner=true in "addons-963512"
	I0307 21:48:30.397760    8582 host.go:66] Checking if "addons-963512" exists ...
	I0307 21:48:30.398157    8582 cli_runner.go:164] Run: docker container inspect addons-963512 --format={{.State.Status}}
	I0307 21:48:30.403321    8582 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-963512"
	I0307 21:48:30.403414    8582 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-963512"
	I0307 21:48:30.403467    8582 host.go:66] Checking if "addons-963512" exists ...
	I0307 21:48:30.403954    8582 cli_runner.go:164] Run: docker container inspect addons-963512 --format={{.State.Status}}
	I0307 21:48:30.409620    8582 addons.go:69] Setting metrics-server=true in profile "addons-963512"
	I0307 21:48:30.409668    8582 addons.go:234] Setting addon metrics-server=true in "addons-963512"
	I0307 21:48:30.409701    8582 host.go:66] Checking if "addons-963512" exists ...
	I0307 21:48:30.410287    8582 cli_runner.go:164] Run: docker container inspect addons-963512 --format={{.State.Status}}
	I0307 21:48:30.415450    8582 addons.go:69] Setting default-storageclass=true in profile "addons-963512"
	I0307 21:48:30.415505    8582 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-963512"
	I0307 21:48:30.415805    8582 cli_runner.go:164] Run: docker container inspect addons-963512 --format={{.State.Status}}
	I0307 21:48:30.415939    8582 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-963512"
	I0307 21:48:30.415965    8582 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-963512"
	I0307 21:48:30.416005    8582 host.go:66] Checking if "addons-963512" exists ...
	I0307 21:48:30.416448    8582 cli_runner.go:164] Run: docker container inspect addons-963512 --format={{.State.Status}}
	I0307 21:48:30.435085    8582 addons.go:69] Setting gcp-auth=true in profile "addons-963512"
	I0307 21:48:30.436546    8582 addons.go:69] Setting registry=true in profile "addons-963512"
	I0307 21:48:30.436580    8582 addons.go:234] Setting addon registry=true in "addons-963512"
	I0307 21:48:30.436617    8582 host.go:66] Checking if "addons-963512" exists ...
	I0307 21:48:30.437058    8582 cli_runner.go:164] Run: docker container inspect addons-963512 --format={{.State.Status}}
	I0307 21:48:30.439110    8582 mustload.go:65] Loading cluster: addons-963512
	I0307 21:48:30.439372    8582 config.go:182] Loaded profile config "addons-963512": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0307 21:48:30.439706    8582 cli_runner.go:164] Run: docker container inspect addons-963512 --format={{.State.Status}}
	I0307 21:48:30.395642    8582 addons.go:69] Setting yakd=true in profile "addons-963512"
	I0307 21:48:30.458781    8582 addons.go:234] Setting addon yakd=true in "addons-963512"
	I0307 21:48:30.458826    8582 host.go:66] Checking if "addons-963512" exists ...
	I0307 21:48:30.459274    8582 cli_runner.go:164] Run: docker container inspect addons-963512 --format={{.State.Status}}
	I0307 21:48:30.478090    8582 addons.go:69] Setting storage-provisioner=true in profile "addons-963512"
	I0307 21:48:30.478174    8582 addons.go:234] Setting addon storage-provisioner=true in "addons-963512"
	I0307 21:48:30.478233    8582 host.go:66] Checking if "addons-963512" exists ...
	I0307 21:48:30.488862    8582 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.0
	I0307 21:48:30.488443    8582 cli_runner.go:164] Run: docker container inspect addons-963512 --format={{.State.Status}}
	I0307 21:48:30.480360    8582 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-963512"
	I0307 21:48:30.480374    8582 addons.go:69] Setting volumesnapshots=true in profile "addons-963512"
	I0307 21:48:30.493105    8582 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0307 21:48:30.521747    8582 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.25.1
	I0307 21:48:30.521821    8582 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-963512"
	I0307 21:48:30.521914    8582 addons.go:234] Setting addon volumesnapshots=true in "addons-963512"
	I0307 21:48:30.524762    8582 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.14
	I0307 21:48:30.524843    8582 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0307 21:48:30.524929    8582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0307 21:48:30.530545    8582 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0307 21:48:30.545031    8582 cli_runner.go:164] Run: docker container inspect addons-963512 --format={{.State.Status}}
	I0307 21:48:30.545095    8582 host.go:66] Checking if "addons-963512" exists ...
	I0307 21:48:30.545224    8582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-963512
	I0307 21:48:30.550189    8582 addons.go:234] Setting addon default-storageclass=true in "addons-963512"
	I0307 21:48:30.550314    8582 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.5
	I0307 21:48:30.550355    8582 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0307 21:48:30.556813    8582 cli_runner.go:164] Run: docker container inspect addons-963512 --format={{.State.Status}}
	I0307 21:48:30.557369    8582 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0307 21:48:30.557420    8582 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0307 21:48:30.565154    8582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0307 21:48:30.565222    8582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-963512
	I0307 21:48:30.567461    8582 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0307 21:48:30.580584    8582 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0307 21:48:30.584420    8582 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0307 21:48:30.566237    8582 host.go:66] Checking if "addons-963512" exists ...
	I0307 21:48:30.567437    8582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0307 21:48:30.580517    8582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0307 21:48:30.590551    8582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-963512
	I0307 21:48:30.594826    8582 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0307 21:48:30.594846    8582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0307 21:48:30.594911    8582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-963512
	I0307 21:48:30.648533    8582 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0307 21:48:30.650864    8582 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.0
	I0307 21:48:30.653252    8582 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0307 21:48:30.653271    8582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0307 21:48:30.653337    8582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-963512
	I0307 21:48:30.580498    8582 out.go:177]   - Using image docker.io/registry:2.8.3
	I0307 21:48:30.662369    8582 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0307 21:48:30.665060    8582 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0307 21:48:30.665082    8582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0307 21:48:30.665137    8582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-963512
	I0307 21:48:30.686763    8582 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0307 21:48:30.662610    8582 host.go:66] Checking if "addons-963512" exists ...
	I0307 21:48:30.663204    8582 cli_runner.go:164] Run: docker container inspect addons-963512 --format={{.State.Status}}
	I0307 21:48:30.663234    8582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-963512
	I0307 21:48:30.701235    8582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18320-2408/.minikube/machines/addons-963512/id_rsa Username:docker}
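
The `sshutil.go:53` line closes the loop on the earlier `docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'"` calls: the node container publishes its SSH port on loopback, and minikube dials that mapping (127.0.0.1:32772 here) with the profile's generated key. The same mapping can be read back with the stock docker CLI:

	# Host side of the container's 22/tcp mapping; prints 127.0.0.1:32772 in this run.
	docker port addons-963512 22/tcp
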
	I0307 21:48:30.701895    8582 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0307 21:48:30.721860    8582 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0307 21:48:30.723919    8582 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0307 21:48:30.702223    8582 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0307 21:48:30.724984    8582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18320-2408/.minikube/machines/addons-963512/id_rsa Username:docker}
	I0307 21:48:30.728196    8582 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0307 21:48:30.736495    8582 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0307 21:48:30.738697    8582 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0307 21:48:30.738718    8582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0307 21:48:30.738781    8582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-963512
	I0307 21:48:30.749385    8582 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-963512"
	I0307 21:48:30.749429    8582 host.go:66] Checking if "addons-963512" exists ...
	I0307 21:48:30.749834    8582 cli_runner.go:164] Run: docker container inspect addons-963512 --format={{.State.Status}}
	I0307 21:48:30.762586    8582 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0307 21:48:30.764580    8582 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0307 21:48:30.764603    8582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0307 21:48:30.764668    8582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-963512
	I0307 21:48:30.796245    8582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0307 21:48:30.796425    8582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-963512
	I0307 21:48:30.804993    8582 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0307 21:48:30.810783    8582 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0307 21:48:30.810815    8582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0307 21:48:30.810883    8582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-963512
	I0307 21:48:30.805208    8582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18320-2408/.minikube/machines/addons-963512/id_rsa Username:docker}
	I0307 21:48:30.848253    8582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18320-2408/.minikube/machines/addons-963512/id_rsa Username:docker}
	I0307 21:48:30.863382    8582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18320-2408/.minikube/machines/addons-963512/id_rsa Username:docker}
	I0307 21:48:30.893093    8582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18320-2408/.minikube/machines/addons-963512/id_rsa Username:docker}
	I0307 21:48:30.911546    8582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18320-2408/.minikube/machines/addons-963512/id_rsa Username:docker}
	I0307 21:48:30.920753    8582 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0307 21:48:30.920776    8582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0307 21:48:30.920832    8582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-963512
	I0307 21:48:30.956161    8582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18320-2408/.minikube/machines/addons-963512/id_rsa Username:docker}
	I0307 21:48:30.967160    8582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18320-2408/.minikube/machines/addons-963512/id_rsa Username:docker}
	I0307 21:48:30.970777    8582 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0307 21:48:30.972246    8582 out.go:177]   - Using image docker.io/busybox:stable
	I0307 21:48:30.976682    8582 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0307 21:48:30.976698    8582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0307 21:48:30.976761    8582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-963512
	I0307 21:48:30.999969    8582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18320-2408/.minikube/machines/addons-963512/id_rsa Username:docker}
	I0307 21:48:31.000870    8582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18320-2408/.minikube/machines/addons-963512/id_rsa Username:docker}
	I0307 21:48:31.047165    8582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18320-2408/.minikube/machines/addons-963512/id_rsa Username:docker}
	I0307 21:48:31.051786    8582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18320-2408/.minikube/machines/addons-963512/id_rsa Username:docker}
	I0307 21:48:31.135551    8582 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
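
This `/bin/bash -c` pipeline edits CoreDNS in place: it reads the `coredns` ConfigMap, uses `sed` to splice a `hosts` block in front of the `forward . /etc/resolv.conf` plugin (so `host.minikube.internal` resolves to the gateway, 192.168.49.1) and a `log` directive in front of `errors`, then pipes the result back through `kubectl replace`. Assuming a stock kubeadm-style Corefile, the edited section comes out roughly like this (a sketch; only the `log` line and the `hosts` block are guaranteed by the command above):

	.:53 {
	    log
	    errors
	    health
	    kubernetes cluster.local in-addr.arpa ip6.arpa
	    hosts {
	       192.168.49.1 host.minikube.internal
	       fallthrough
	    }
	    forward . /etc/resolv.conf
	    cache 30
	}
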
	I0307 21:48:31.135672    8582 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0307 21:48:31.191037    8582 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0307 21:48:31.191064    8582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0307 21:48:31.196430    8582 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0307 21:48:31.248452    8582 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0307 21:48:31.288364    8582 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0307 21:48:31.302327    8582 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0307 21:48:31.302360    8582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0307 21:48:31.306458    8582 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0307 21:48:31.329220    8582 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0307 21:48:31.372074    8582 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0307 21:48:31.372099    8582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0307 21:48:31.392882    8582 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0307 21:48:31.392924    8582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0307 21:48:31.418772    8582 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0307 21:48:31.418807    8582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0307 21:48:31.454099    8582 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0307 21:48:31.454124    8582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0307 21:48:31.457205    8582 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0307 21:48:31.462937    8582 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0307 21:48:31.481105    8582 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0307 21:48:31.481128    8582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0307 21:48:31.490215    8582 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0307 21:48:31.490240    8582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0307 21:48:31.601386    8582 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0307 21:48:31.601413    8582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0307 21:48:31.615768    8582 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0307 21:48:31.615792    8582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0307 21:48:31.660458    8582 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0307 21:48:31.660482    8582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0307 21:48:31.663742    8582 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0307 21:48:31.669477    8582 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0307 21:48:31.669547    8582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0307 21:48:31.721459    8582 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0307 21:48:31.721484    8582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0307 21:48:31.747382    8582 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0307 21:48:31.747415    8582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0307 21:48:31.755688    8582 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0307 21:48:31.755723    8582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0307 21:48:31.821194    8582 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0307 21:48:31.821227    8582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0307 21:48:31.850378    8582 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0307 21:48:31.850403    8582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0307 21:48:31.898520    8582 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0307 21:48:31.898547    8582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0307 21:48:31.900326    8582 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0307 21:48:31.900351    8582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0307 21:48:31.989085    8582 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0307 21:48:31.991773    8582 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0307 21:48:31.991812    8582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0307 21:48:32.024433    8582 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0307 21:48:32.135477    8582 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0307 21:48:32.135504    8582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0307 21:48:32.143697    8582 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0307 21:48:32.143722    8582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0307 21:48:32.200395    8582 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0307 21:48:32.200419    8582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0307 21:48:32.332848    8582 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0307 21:48:32.332877    8582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0307 21:48:32.481305    8582 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0307 21:48:32.488964    8582 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0307 21:48:32.488997    8582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0307 21:48:32.734287    8582 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0307 21:48:32.734317    8582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0307 21:48:32.882788    8582 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0307 21:48:32.935449    8582 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0307 21:48:32.935481    8582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0307 21:48:33.119058    8582 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0307 21:48:33.119088    8582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0307 21:48:33.344726    8582 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0307 21:48:33.344758    8582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0307 21:48:33.699597    8582 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0307 21:48:33.699628    8582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0307 21:48:34.221524    8582 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0307 21:48:34.221568    8582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0307 21:48:34.461681    8582 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0307 21:48:34.546989    8582 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.411289824s)
	I0307 21:48:34.547129    8582 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.411549162s)
	I0307 21:48:34.547146    8582 start.go:948] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0307 21:48:34.548688    8582 node_ready.go:35] waiting up to 6m0s for node "addons-963512" to be "Ready" ...
	I0307 21:48:34.552488    8582 node_ready.go:49] node "addons-963512" has status "Ready":"True"
	I0307 21:48:34.552509    8582 node_ready.go:38] duration metric: took 3.797489ms for node "addons-963512" to be "Ready" ...
	I0307 21:48:34.552518    8582 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
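
Before any addon verification starts, minikube gates on the control plane itself: the label list above covers CoreDNS, the four static control-plane pods, and kube-proxy. Expressed with stock kubectl, the same readiness gate is roughly (a sketch; the 360s matches the 6m0s budget in the log):

	for l in k8s-app=kube-dns component=etcd component=kube-apiserver \
	         component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler; do
	  kubectl -n kube-system wait --for=condition=Ready pod -l "$l" --timeout=360s
	done
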
	I0307 21:48:34.563281    8582 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-29fkp" in "kube-system" namespace to be "Ready" ...
	I0307 21:48:34.931049    8582 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.734582053s)
	I0307 21:48:35.052183    8582 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-963512" context rescaled to 1 replicas
	I0307 21:48:35.440029    8582 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.191541003s)
	I0307 21:48:35.440140    8582 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.151752665s)
	I0307 21:48:36.570340    8582 pod_ready.go:102] pod "coredns-5dd5756b68-29fkp" in "kube-system" namespace has status "Ready":"False"
	I0307 21:48:37.514000    8582 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0307 21:48:37.514084    8582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-963512
	I0307 21:48:37.535542    8582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18320-2408/.minikube/machines/addons-963512/id_rsa Username:docker}
	I0307 21:48:37.871120    8582 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0307 21:48:37.896293    8582 addons.go:234] Setting addon gcp-auth=true in "addons-963512"
	I0307 21:48:37.896343    8582 host.go:66] Checking if "addons-963512" exists ...
	I0307 21:48:37.896859    8582 cli_runner.go:164] Run: docker container inspect addons-963512 --format={{.State.Status}}
	I0307 21:48:37.928132    8582 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0307 21:48:37.928190    8582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-963512
	I0307 21:48:37.959514    8582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18320-2408/.minikube/machines/addons-963512/id_rsa Username:docker}
	I0307 21:48:38.573524    8582 pod_ready.go:102] pod "coredns-5dd5756b68-29fkp" in "kube-system" namespace has status "Ready":"False"
	I0307 21:48:39.386345    8582 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.057084393s)
	I0307 21:48:39.386424    8582 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.929196437s)
	I0307 21:48:39.386495    8582 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.080002504s)
	I0307 21:48:39.386522    8582 addons.go:470] Verifying addon ingress=true in "addons-963512"
	I0307 21:48:39.388892    8582 out.go:177] * Verifying ingress addon...
	I0307 21:48:39.386789    8582 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.923829453s)
	I0307 21:48:39.391600    8582 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0307 21:48:39.386893    8582 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.723120499s)
	I0307 21:48:39.391813    8582 addons.go:470] Verifying addon registry=true in "addons-963512"
	I0307 21:48:39.386999    8582 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.362532385s)
	I0307 21:48:39.387084    8582 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.905746539s)
	I0307 21:48:39.387138    8582 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.504321607s)
	I0307 21:48:39.386944    8582 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.397833342s)
	I0307 21:48:39.393744    8582 addons.go:470] Verifying addon metrics-server=true in "addons-963512"
	I0307 21:48:39.393768    8582 out.go:177] * Verifying registry addon...
	I0307 21:48:39.396857    8582 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	W0307 21:48:39.393872    8582 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0307 21:48:39.397066    8582 retry.go:31] will retry after 370.790466ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
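
The failure being retried here is the classic CRD/CR ordering race: a single `kubectl apply` over all six files submits the `VolumeSnapshotClass` object before the `volumesnapshotclasses.snapshot.storage.k8s.io` CRD defining it has been established, so the REST-mapping lookup fails with "no matches for kind". minikube's answer is simply to retry (below, with `apply --force`). Done by hand, the race is avoided by waiting on the CRD's `Established` condition first (standard kubectl; file paths taken from the log):

	kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	kubectl wait --for condition=established --timeout=60s \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
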
	I0307 21:48:39.399364    8582 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-963512 service yakd-dashboard -n yakd-dashboard
	
	I0307 21:48:39.401493    8582 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0307 21:48:39.403709    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:48:39.409106    8582 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0307 21:48:39.409135    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
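
The `kapi.go:96` lines that dominate the remainder of this log are minikube's own poll loop: each tick re-lists the pods behind the label selector and reports their aggregate phase until they leave `Pending`. A hand-rolled equivalent with stock kubectl would look like (a sketch, not minikube's code; the timeout is arbitrary):

	kubectl -n ingress-nginx wait --for=condition=Ready pod \
	  -l app.kubernetes.io/name=ingress-nginx --timeout=300s
	kubectl -n kube-system wait --for=condition=Ready pod \
	  -l kubernetes.io/minikube-addons=registry --timeout=300s
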
	W0307 21:48:39.427071    8582 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
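
The `default-storageclass` warning is not an addon bug but Kubernetes optimistic concurrency: between reading the `local-path` StorageClass and writing it back, another writer bumped its `resourceVersion`, so the stale update is rejected. Re-running the change succeeds because `kubectl patch` applies against the current object rather than a cached copy (real command; the annotation is the standard default-class marker):

	kubectl patch storageclass local-path -p \
	  '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
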
	I0307 21:48:39.768356    8582 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0307 21:48:39.896907    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:48:39.902127    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 21:48:40.483332    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:48:40.493009    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 21:48:40.572772    8582 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.644606371s)
	I0307 21:48:40.574636    8582 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.1
	I0307 21:48:40.572900    8582 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.111016973s)
	I0307 21:48:40.577296    8582 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-963512"
	I0307 21:48:40.582047    8582 out.go:177] * Verifying csi-hostpath-driver addon...
	I0307 21:48:40.584527    8582 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0307 21:48:40.594825    8582 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0307 21:48:40.594856    8582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0307 21:48:40.585376    8582 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0307 21:48:40.635152    8582 pod_ready.go:102] pod "coredns-5dd5756b68-29fkp" in "kube-system" namespace has status "Ready":"False"
	I0307 21:48:40.636580    8582 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0307 21:48:40.636601    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:48:40.682833    8582 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0307 21:48:40.682866    8582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0307 21:48:40.743342    8582 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0307 21:48:40.743366    8582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0307 21:48:40.800851    8582 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0307 21:48:40.901328    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:48:40.909439    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 21:48:41.102010    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:48:41.396876    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:48:41.402359    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 21:48:41.601375    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:48:41.923616    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:48:41.928365    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 21:48:42.148319    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:48:42.316116    8582 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.515204539s)
	I0307 21:48:42.316266    8582 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.54787463s)
	I0307 21:48:42.319786    8582 addons.go:470] Verifying addon gcp-auth=true in "addons-963512"
	I0307 21:48:42.323523    8582 out.go:177] * Verifying gcp-auth addon...
	I0307 21:48:42.326486    8582 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0307 21:48:42.330283    8582 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0307 21:48:42.330324    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:48:42.400157    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:48:42.405315    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 21:48:42.600884    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:48:42.830866    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:48:42.895948    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:48:42.901166    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 21:48:43.069916    8582 pod_ready.go:102] pod "coredns-5dd5756b68-29fkp" in "kube-system" namespace has status "Ready":"False"
	I0307 21:48:43.101161    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:48:43.331072    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:48:43.396400    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:48:43.402156    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 21:48:43.601017    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:48:43.830885    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:48:43.896924    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:48:43.901083    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 21:48:44.101418    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:48:44.348979    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:48:44.396196    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:48:44.401923    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 21:48:44.601034    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:48:44.837997    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:48:44.896486    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:48:44.903659    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 21:48:45.073070    8582 pod_ready.go:102] pod "coredns-5dd5756b68-29fkp" in "kube-system" namespace has status "Ready":"False"
	I0307 21:48:45.104059    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:48:45.332005    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:48:45.397995    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:48:45.401702    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 21:48:45.601485    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:48:45.830389    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:48:45.896664    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:48:45.903197    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 21:48:46.102404    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:48:46.332649    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:48:46.396231    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:48:46.401849    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 21:48:46.601080    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:48:46.830202    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:48:46.896510    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:48:46.901174    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 21:48:47.076645    8582 pod_ready.go:102] pod "coredns-5dd5756b68-29fkp" in "kube-system" namespace has status "Ready":"False"
	I0307 21:48:47.101314    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:48:47.330807    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:48:47.398037    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:48:47.408496    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 21:48:47.600776    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:48:47.833217    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:48:47.926092    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:48:47.952620    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 21:48:48.070676    8582 pod_ready.go:92] pod "coredns-5dd5756b68-29fkp" in "kube-system" namespace has status "Ready":"True"
	I0307 21:48:48.070712    8582 pod_ready.go:81] duration metric: took 13.50738532s for pod "coredns-5dd5756b68-29fkp" in "kube-system" namespace to be "Ready" ...
	I0307 21:48:48.070726    8582 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-5mrbg" in "kube-system" namespace to be "Ready" ...
	I0307 21:48:48.073196    8582 pod_ready.go:97] error getting pod "coredns-5dd5756b68-5mrbg" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-5mrbg" not found
	I0307 21:48:48.073227    8582 pod_ready.go:81] duration metric: took 2.493175ms for pod "coredns-5dd5756b68-5mrbg" in "kube-system" namespace to be "Ready" ...
	E0307 21:48:48.073240    8582 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-5mrbg" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-5mrbg" not found
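
This `not found` is expected rather than a second failure: the `kapi.go:248` line further up rescaled the `coredns` deployment to 1 replica at 21:48:35, so the second replica's pod was deleted while the waiter still held it in its list; `pod_ready` logs the error and skips the pod. The rescale corresponds to (real kubectl; context name from this run):

	kubectl --context addons-963512 -n kube-system scale deployment coredns --replicas=1
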
	I0307 21:48:48.073248    8582 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-963512" in "kube-system" namespace to be "Ready" ...
	I0307 21:48:48.080393    8582 pod_ready.go:92] pod "etcd-addons-963512" in "kube-system" namespace has status "Ready":"True"
	I0307 21:48:48.080420    8582 pod_ready.go:81] duration metric: took 7.154028ms for pod "etcd-addons-963512" in "kube-system" namespace to be "Ready" ...
	I0307 21:48:48.080445    8582 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-963512" in "kube-system" namespace to be "Ready" ...
	I0307 21:48:48.087501    8582 pod_ready.go:92] pod "kube-apiserver-addons-963512" in "kube-system" namespace has status "Ready":"True"
	I0307 21:48:48.087527    8582 pod_ready.go:81] duration metric: took 7.074266ms for pod "kube-apiserver-addons-963512" in "kube-system" namespace to be "Ready" ...
	I0307 21:48:48.087550    8582 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-963512" in "kube-system" namespace to be "Ready" ...
	I0307 21:48:48.095350    8582 pod_ready.go:92] pod "kube-controller-manager-addons-963512" in "kube-system" namespace has status "Ready":"True"
	I0307 21:48:48.095375    8582 pod_ready.go:81] duration metric: took 7.815982ms for pod "kube-controller-manager-addons-963512" in "kube-system" namespace to be "Ready" ...
	I0307 21:48:48.095388    8582 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-w6gxd" in "kube-system" namespace to be "Ready" ...
	I0307 21:48:48.104106    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:48:48.269007    8582 pod_ready.go:92] pod "kube-proxy-w6gxd" in "kube-system" namespace has status "Ready":"True"
	I0307 21:48:48.269043    8582 pod_ready.go:81] duration metric: took 173.64734ms for pod "kube-proxy-w6gxd" in "kube-system" namespace to be "Ready" ...
	I0307 21:48:48.269060    8582 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-963512" in "kube-system" namespace to be "Ready" ...
	I0307 21:48:48.330555    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:48:48.396609    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:48:48.402416    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 21:48:48.602197    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:48:48.667167    8582 pod_ready.go:92] pod "kube-scheduler-addons-963512" in "kube-system" namespace has status "Ready":"True"
	I0307 21:48:48.667194    8582 pod_ready.go:81] duration metric: took 398.125604ms for pod "kube-scheduler-addons-963512" in "kube-system" namespace to be "Ready" ...
	I0307 21:48:48.667207    8582 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-skr6t" in "kube-system" namespace to be "Ready" ...
	I0307 21:48:48.830978    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:48:48.896699    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:48:48.902063    8582 kapi.go:107] duration metric: took 9.505201603s to wait for kubernetes.io/minikube-addons=registry ...
	I0307 21:48:49.101100    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:48:49.330309    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:48:49.396695    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:48:49.600901    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:48:49.830739    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:48:49.897976    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:48:50.101815    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:48:50.331264    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:48:50.396803    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:48:50.600547    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:48:50.674209    8582 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-skr6t" in "kube-system" namespace has status "Ready":"False"
	I0307 21:48:50.834067    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:48:50.896820    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:48:51.101424    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:48:51.331314    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:48:51.397163    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:48:51.601217    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:48:51.830742    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:48:51.897498    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:48:52.101659    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:48:52.330932    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:48:52.396547    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:48:52.601178    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:48:52.830645    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:48:52.905502    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:48:53.100995    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:48:53.173655    8582 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-skr6t" in "kube-system" namespace has status "Ready":"False"
	I0307 21:48:53.330754    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:48:53.396473    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:48:53.600837    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:48:53.830123    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:48:53.897378    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:48:54.102737    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:48:54.332054    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:48:54.401279    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:48:54.601015    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:48:54.830967    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:48:54.896467    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:48:55.102651    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:48:55.175042    8582 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-skr6t" in "kube-system" namespace has status "Ready":"False"
	I0307 21:48:55.331036    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:48:55.396791    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:48:55.600719    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:48:55.830408    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:48:55.896081    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:48:56.100791    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:48:56.330299    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:48:56.397162    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:48:56.600557    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:48:56.830604    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:48:56.899747    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:48:57.101407    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:48:57.329764    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:48:57.395828    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:48:57.602554    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:48:57.678994    8582 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-skr6t" in "kube-system" namespace has status "Ready":"False"
	I0307 21:48:57.844672    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:48:57.896816    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:48:58.104640    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:48:58.335382    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:48:58.396749    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:48:58.600712    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:48:58.830815    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:48:58.896705    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:48:59.101591    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:48:59.331180    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:48:59.396675    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:48:59.605219    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:48:59.830611    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:48:59.896122    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:49:00.193413    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:49:00.201499    8582 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-skr6t" in "kube-system" namespace has status "Ready":"False"
	I0307 21:49:00.358309    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:49:00.397929    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:49:00.602511    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:49:00.831453    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:49:00.896085    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:49:01.101624    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:49:01.330525    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:49:01.396163    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:49:01.601630    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:49:01.831098    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:49:01.895967    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:49:02.101434    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:49:02.330804    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:49:02.397742    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:49:02.601765    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:49:02.676495    8582 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-skr6t" in "kube-system" namespace has status "Ready":"False"
	I0307 21:49:02.831701    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:49:02.898351    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:49:03.101861    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:49:03.332995    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:49:03.396014    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:49:03.602665    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:49:03.831156    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:49:03.896768    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:49:04.102004    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:49:04.331593    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:49:04.398215    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:49:04.600601    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:49:04.830297    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:49:04.896360    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:49:05.104204    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:49:05.182100    8582 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-skr6t" in "kube-system" namespace has status "Ready":"False"
	I0307 21:49:05.330570    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:49:05.396168    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:49:05.601490    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:49:05.830347    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:49:05.896477    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:49:06.101903    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:49:06.330571    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:49:06.396059    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:49:06.600599    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:49:06.830167    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:49:06.896850    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:49:07.111928    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:49:07.334848    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:49:07.396519    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:49:07.601440    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:49:07.673790    8582 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-skr6t" in "kube-system" namespace has status "Ready":"False"
	I0307 21:49:07.830045    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:49:07.896111    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:49:08.102892    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:49:08.330179    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:49:08.396035    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:49:08.601175    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:49:08.830469    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:49:08.903761    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:49:09.101511    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:49:09.330673    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:49:09.397092    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:49:09.601173    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:49:09.831384    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:49:09.898064    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:49:10.101041    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:49:10.175266    8582 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-skr6t" in "kube-system" namespace has status "Ready":"False"
	I0307 21:49:10.331101    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:49:10.395858    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:49:10.600129    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:49:10.831052    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:49:10.896190    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:49:11.102094    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:49:11.331464    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:49:11.396670    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:49:11.601137    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:49:11.830479    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:49:11.897054    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:49:12.101178    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:49:12.178607    8582 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-skr6t" in "kube-system" namespace has status "Ready":"False"
	I0307 21:49:12.332421    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:49:12.402977    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:49:12.601568    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:49:12.830472    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:49:12.897365    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:49:13.100904    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:49:13.331149    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:49:13.396523    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:49:13.602170    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:49:13.830240    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:49:13.897246    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:49:14.101054    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:49:14.330031    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:49:14.395936    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:49:14.600439    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:49:14.673705    8582 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-skr6t" in "kube-system" namespace has status "Ready":"True"
	I0307 21:49:14.673734    8582 pod_ready.go:81] duration metric: took 26.006519063s for pod "nvidia-device-plugin-daemonset-skr6t" in "kube-system" namespace to be "Ready" ...
	I0307 21:49:14.673745    8582 pod_ready.go:38] duration metric: took 40.121215117s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0307 21:49:14.673759    8582 api_server.go:52] waiting for apiserver process to appear ...
	I0307 21:49:14.673817    8582 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 21:49:14.693989    8582 api_server.go:72] duration metric: took 44.303134752s to wait for apiserver process to appear ...
	I0307 21:49:14.694012    8582 api_server.go:88] waiting for apiserver healthz status ...
	I0307 21:49:14.694032    8582 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0307 21:49:14.702391    8582 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0307 21:49:14.703645    8582 api_server.go:141] control plane version: v1.28.4
	I0307 21:49:14.703667    8582 api_server.go:131] duration metric: took 9.648294ms to wait for apiserver health ...
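	The healthz probe logged above can be reproduced by hand against the same endpoint. A minimal sketch, assuming the host can reach the minikube network and that default RBAC still allows anonymous access to /healthz; -k skips verification of the cluster-internal certificate:
	
	    curl -k https://192.168.49.2:8443/healthz
	    # a healthy apiserver responds 200 with body: ok
	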
	I0307 21:49:14.703689    8582 system_pods.go:43] waiting for kube-system pods to appear ...
	I0307 21:49:14.714899    8582 system_pods.go:59] 18 kube-system pods found
	I0307 21:49:14.714934    8582 system_pods.go:61] "coredns-5dd5756b68-29fkp" [f8bab481-83f1-4618-a41d-c1f4f52609ab] Running
	I0307 21:49:14.714942    8582 system_pods.go:61] "csi-hostpath-attacher-0" [ae25642c-4c09-4ba4-986a-fc0894df39a3] Running
	I0307 21:49:14.714946    8582 system_pods.go:61] "csi-hostpath-resizer-0" [4e28b525-6966-4a98-9c3d-b3bc7b156676] Running
	I0307 21:49:14.714954    8582 system_pods.go:61] "csi-hostpathplugin-7rl2z" [f5001b06-75f0-4446-b62f-e7de26581524] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0307 21:49:14.714960    8582 system_pods.go:61] "etcd-addons-963512" [6b9f7896-fb74-4fcf-ae8a-65f8a002118e] Running
	I0307 21:49:14.714965    8582 system_pods.go:61] "kindnet-ch46s" [92a51680-a4d1-4636-b15d-185301519096] Running
	I0307 21:49:14.714970    8582 system_pods.go:61] "kube-apiserver-addons-963512" [2618ff7b-95ab-4a4a-9343-652bbd3c76c7] Running
	I0307 21:49:14.714974    8582 system_pods.go:61] "kube-controller-manager-addons-963512" [5ca9551f-1d7e-4be1-ae8b-67c3d148f01f] Running
	I0307 21:49:14.714980    8582 system_pods.go:61] "kube-ingress-dns-minikube" [9bb07d00-3bb1-44e2-b99e-2d1e7e05e2ad] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0307 21:49:14.714984    8582 system_pods.go:61] "kube-proxy-w6gxd" [7fead251-876a-464c-94cf-c12167cd82af] Running
	I0307 21:49:14.714989    8582 system_pods.go:61] "kube-scheduler-addons-963512" [6b4ff72e-13fe-4452-b503-74efebe5a751] Running
	I0307 21:49:14.714993    8582 system_pods.go:61] "metrics-server-69cf46c98-qzrq2" [026e1c6f-54a2-4f1b-83ab-9b6f24976fe3] Running
	I0307 21:49:14.714997    8582 system_pods.go:61] "nvidia-device-plugin-daemonset-skr6t" [bd96c754-234b-4da9-a225-d2510af33519] Running
	I0307 21:49:14.715001    8582 system_pods.go:61] "registry-g5s9h" [5acdf3e9-cdb0-4e0e-82bc-ce557a05d53f] Running
	I0307 21:49:14.715004    8582 system_pods.go:61] "registry-proxy-pn5c9" [0bc81ee0-ca94-4da6-9aa2-210e4709466d] Running
	I0307 21:49:14.715008    8582 system_pods.go:61] "snapshot-controller-58dbcc7b99-bqr6j" [67928c3e-8f43-49cf-b47f-8d32a9d92f23] Running
	I0307 21:49:14.715012    8582 system_pods.go:61] "snapshot-controller-58dbcc7b99-qjd59" [6b4b94ea-e889-4004-9dea-98d277831414] Running
	I0307 21:49:14.715016    8582 system_pods.go:61] "storage-provisioner" [60f33f3b-dabc-4b3a-a17f-ff46ff70601d] Running
	I0307 21:49:14.715022    8582 system_pods.go:74] duration metric: took 11.320976ms to wait for pod list to return data ...
	I0307 21:49:14.715033    8582 default_sa.go:34] waiting for default service account to be created ...
	I0307 21:49:14.717631    8582 default_sa.go:45] found service account: "default"
	I0307 21:49:14.717655    8582 default_sa.go:55] duration metric: took 2.609294ms for default service account to be created ...
	I0307 21:49:14.717664    8582 system_pods.go:116] waiting for k8s-apps to be running ...
	I0307 21:49:14.728649    8582 system_pods.go:86] 18 kube-system pods found
	I0307 21:49:14.728684    8582 system_pods.go:89] "coredns-5dd5756b68-29fkp" [f8bab481-83f1-4618-a41d-c1f4f52609ab] Running
	I0307 21:49:14.728692    8582 system_pods.go:89] "csi-hostpath-attacher-0" [ae25642c-4c09-4ba4-986a-fc0894df39a3] Running
	I0307 21:49:14.728697    8582 system_pods.go:89] "csi-hostpath-resizer-0" [4e28b525-6966-4a98-9c3d-b3bc7b156676] Running
	I0307 21:49:14.728706    8582 system_pods.go:89] "csi-hostpathplugin-7rl2z" [f5001b06-75f0-4446-b62f-e7de26581524] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0307 21:49:14.728712    8582 system_pods.go:89] "etcd-addons-963512" [6b9f7896-fb74-4fcf-ae8a-65f8a002118e] Running
	I0307 21:49:14.728717    8582 system_pods.go:89] "kindnet-ch46s" [92a51680-a4d1-4636-b15d-185301519096] Running
	I0307 21:49:14.728721    8582 system_pods.go:89] "kube-apiserver-addons-963512" [2618ff7b-95ab-4a4a-9343-652bbd3c76c7] Running
	I0307 21:49:14.728726    8582 system_pods.go:89] "kube-controller-manager-addons-963512" [5ca9551f-1d7e-4be1-ae8b-67c3d148f01f] Running
	I0307 21:49:14.728733    8582 system_pods.go:89] "kube-ingress-dns-minikube" [9bb07d00-3bb1-44e2-b99e-2d1e7e05e2ad] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0307 21:49:14.728740    8582 system_pods.go:89] "kube-proxy-w6gxd" [7fead251-876a-464c-94cf-c12167cd82af] Running
	I0307 21:49:14.728745    8582 system_pods.go:89] "kube-scheduler-addons-963512" [6b4ff72e-13fe-4452-b503-74efebe5a751] Running
	I0307 21:49:14.728753    8582 system_pods.go:89] "metrics-server-69cf46c98-qzrq2" [026e1c6f-54a2-4f1b-83ab-9b6f24976fe3] Running
	I0307 21:49:14.728757    8582 system_pods.go:89] "nvidia-device-plugin-daemonset-skr6t" [bd96c754-234b-4da9-a225-d2510af33519] Running
	I0307 21:49:14.728761    8582 system_pods.go:89] "registry-g5s9h" [5acdf3e9-cdb0-4e0e-82bc-ce557a05d53f] Running
	I0307 21:49:14.728772    8582 system_pods.go:89] "registry-proxy-pn5c9" [0bc81ee0-ca94-4da6-9aa2-210e4709466d] Running
	I0307 21:49:14.728778    8582 system_pods.go:89] "snapshot-controller-58dbcc7b99-bqr6j" [67928c3e-8f43-49cf-b47f-8d32a9d92f23] Running
	I0307 21:49:14.728784    8582 system_pods.go:89] "snapshot-controller-58dbcc7b99-qjd59" [6b4b94ea-e889-4004-9dea-98d277831414] Running
	I0307 21:49:14.728794    8582 system_pods.go:89] "storage-provisioner" [60f33f3b-dabc-4b3a-a17f-ff46ff70601d] Running
	I0307 21:49:14.728800    8582 system_pods.go:126] duration metric: took 11.131685ms to wait for k8s-apps to be running ...
	I0307 21:49:14.728807    8582 system_svc.go:44] waiting for kubelet service to be running ....
	I0307 21:49:14.728864    8582 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0307 21:49:14.745263    8582 system_svc.go:56] duration metric: took 16.446977ms WaitForService to wait for kubelet
	I0307 21:49:14.745292    8582 kubeadm.go:576] duration metric: took 44.354441105s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0307 21:49:14.745310    8582 node_conditions.go:102] verifying NodePressure condition ...
	I0307 21:49:14.748485    8582 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0307 21:49:14.748513    8582 node_conditions.go:123] node cpu capacity is 2
	I0307 21:49:14.748525    8582 node_conditions.go:105] duration metric: took 3.209963ms to run NodePressure ...
	I0307 21:49:14.748539    8582 start.go:240] waiting for startup goroutines ...
	I0307 21:49:14.836849    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:49:14.897631    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:49:15.101204    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:49:15.333797    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:49:15.397070    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:49:15.600875    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:49:15.831268    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:49:15.896861    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:49:16.101131    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:49:16.330606    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:49:16.401334    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:49:16.600920    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:49:16.830247    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:49:16.897935    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:49:17.100593    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:49:17.331207    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:49:17.396950    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:49:17.601797    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:49:17.830522    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:49:17.896826    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:49:18.109248    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:49:18.332316    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:49:18.396530    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:49:18.600800    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:49:18.830422    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:49:18.896557    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:49:19.101508    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:49:19.331066    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:49:19.395580    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:49:19.601130    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:49:19.831472    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:49:19.898106    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:49:20.102219    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:49:20.335130    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:49:20.398399    8582 kapi.go:107] duration metric: took 41.006796118s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0307 21:49:20.601523    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:49:20.830490    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:49:21.101125    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:49:21.331539    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:49:21.601094    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:49:21.831815    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:49:22.101837    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:49:22.330849    8582 kapi.go:107] duration metric: took 40.004360558s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0307 21:49:22.332575    8582 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-963512 cluster.
	I0307 21:49:22.334680    8582 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0307 21:49:22.336493    8582 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
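	A minimal sketch of the opt-out described in the message above. The pod name and image are illustrative placeholders; only the gcp-auth-skip-secret label key comes from the log:
	
	    apiVersion: v1
	    kind: Pod
	    metadata:
	      name: no-gcp-creds               # hypothetical example pod
	      labels:
	        gcp-auth-skip-secret: "true"   # per the message above, the key is what matters
	    spec:
	      containers:
	      - name: app
	        image: nginx                   # placeholder image
	
	And to mount credentials into pods created before the addon finished enabling, the --refresh form mentioned above would be run as, e.g.:
	
	    out/minikube-linux-arm64 -p addons-963512 addons enable gcp-auth --refresh
	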
	I0307 21:49:22.601312    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:49:23.102082    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:49:23.601012    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:49:24.104931    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:49:24.600239    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:49:25.115647    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:49:25.601391    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:49:26.100849    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:49:26.603245    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:49:27.101534    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:49:27.601303    8582 kapi.go:107] duration metric: took 47.015922702s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0307 21:49:27.603818    8582 out.go:177] * Enabled addons: ingress-dns, cloud-spanner, nvidia-device-plugin, storage-provisioner, metrics-server, inspektor-gadget, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0307 21:49:27.606247    8582 addons.go:505] duration metric: took 57.215094396s for enable addons: enabled=[ingress-dns cloud-spanner nvidia-device-plugin storage-provisioner metrics-server inspektor-gadget yakd storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0307 21:49:27.606296    8582 start.go:245] waiting for cluster config update ...
	I0307 21:49:27.606315    8582 start.go:254] writing updated cluster config ...
	I0307 21:49:27.606997    8582 ssh_runner.go:195] Run: rm -f paused
	I0307 21:49:27.943787    8582 start.go:600] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0307 21:49:27.946985    8582 out.go:177] * Done! kubectl is now configured to use "addons-963512" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                         ATTEMPT             POD ID              POD
	3ea9e182c1b34       dd1b12fcb6097       9 seconds ago        Exited              hello-world-app              2                   1b6e54802d428       hello-world-app-5d77478584-frj9x
	4b5ce7ca15eab       be5e6f23a9904       32 seconds ago       Running             nginx                        0                   0e97238db2926       nginx
	3a90d0e29e456       bafe72500920c       About a minute ago   Running             gcp-auth                     0                   206a4bf8e18c1       gcp-auth-5f6b4f85fd-9g9vb
	30d5e600e3f06       c0cfb4ce73bda       About a minute ago   Running             nvidia-device-plugin-ctr     0                   56185c8410ca1       nvidia-device-plugin-daemonset-skr6t
	a24e7358e4b27       1a024e390dd05       About a minute ago   Exited              patch                        1                   eb3bb03f287ec       ingress-nginx-admission-patch-mv9ww
	5e9124a23573d       1a024e390dd05       About a minute ago   Exited              create                       0                   b08b4cbd6bb77       ingress-nginx-admission-create-gn8tt
	7c00de7883860       7ce2150c8929b       About a minute ago   Running             local-path-provisioner       0                   3bf5080f6b9e6       local-path-provisioner-78b46b4d5c-tcxbn
	a476486d2a80a       20e3f2db01e81       About a minute ago   Running             yakd                         0                   92db516cf69fb       yakd-dashboard-9947fc6bf-dw8s7
	2b23d0b44dd54       4d1e5c3e97420       About a minute ago   Running             volume-snapshot-controller   0                   94919244f42d5       snapshot-controller-58dbcc7b99-bqr6j
	cc95bc26ba37b       4d1e5c3e97420       About a minute ago   Running             volume-snapshot-controller   0                   211b3ebb631e5       snapshot-controller-58dbcc7b99-qjd59
	bae1f86f699e7       41340d5d57adb       About a minute ago   Running             cloud-spanner-emulator       0                   1c53186494ddc       cloud-spanner-emulator-6548d5df46-c4d87
	2a9c63a0cc630       97e04611ad434       About a minute ago   Running             coredns                      0                   f1603b63aad7f       coredns-5dd5756b68-29fkp
	79369d5a315db       ba04bb24b9575       2 minutes ago        Running             storage-provisioner          0                   ea35e45248780       storage-provisioner
	9733620690c83       4740c1948d3fc       2 minutes ago        Running             kindnet-cni                  0                   7a93b69dacb02       kindnet-ch46s
	122c40b47440e       3ca3ca488cf13       2 minutes ago        Running             kube-proxy                   0                   034d2ca9d7a01       kube-proxy-w6gxd
	971300809d6dc       9961cbceaf234       2 minutes ago        Running             kube-controller-manager      0                   0134843e7b9b9       kube-controller-manager-addons-963512
	c93588c0a8cce       04b4c447bb9d4       2 minutes ago        Running             kube-apiserver               0                   885e346178767       kube-apiserver-addons-963512
	a122979188eac       05c284c929889       2 minutes ago        Running             kube-scheduler               0                   5f1b15db14af8       kube-scheduler-addons-963512
	920a86d849da2       9cdd6470f48c8       2 minutes ago        Running             etcd                         0                   2c43d971ea471       etcd-addons-963512
	
	
	==> containerd <==
	Mar 07 21:50:29 addons-963512 containerd[764]: time="2024-03-07T21:50:29.702900950Z" level=info msg="cleaning up dead shim"
	Mar 07 21:50:29 addons-963512 containerd[764]: time="2024-03-07T21:50:29.711869379Z" level=warning msg="cleanup warnings time=\"2024-03-07T21:50:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=8897 runtime=io.containerd.runc.v2\n"
	Mar 07 21:50:30 addons-963512 containerd[764]: time="2024-03-07T21:50:30.628034993Z" level=info msg="RemoveContainer for \"70496f1ab8ef7b866761277010419bc742ceefb80fd63f007777639fff5ca5e3\""
	Mar 07 21:50:30 addons-963512 containerd[764]: time="2024-03-07T21:50:30.635012998Z" level=info msg="RemoveContainer for \"70496f1ab8ef7b866761277010419bc742ceefb80fd63f007777639fff5ca5e3\" returns successfully"
	Mar 07 21:50:30 addons-963512 containerd[764]: time="2024-03-07T21:50:30.640240782Z" level=info msg="RemoveContainer for \"8e89db9594f54b2c2afb100adef116ef936275f2d00b4367cbcdb8cf7ac36894\""
	Mar 07 21:50:30 addons-963512 containerd[764]: time="2024-03-07T21:50:30.652970533Z" level=info msg="RemoveContainer for \"8e89db9594f54b2c2afb100adef116ef936275f2d00b4367cbcdb8cf7ac36894\" returns successfully"
	Mar 07 21:50:32 addons-963512 containerd[764]: time="2024-03-07T21:50:32.330035877Z" level=info msg="StopContainer for \"fa7d4d59097cb0d3d67776d7082916c9367684b40284d53e80efb5bd48a24b7b\" with timeout 2 (s)"
	Mar 07 21:50:32 addons-963512 containerd[764]: time="2024-03-07T21:50:32.330755168Z" level=info msg="Stop container \"fa7d4d59097cb0d3d67776d7082916c9367684b40284d53e80efb5bd48a24b7b\" with signal terminated"
	Mar 07 21:50:34 addons-963512 containerd[764]: time="2024-03-07T21:50:34.336969648Z" level=info msg="Kill container \"fa7d4d59097cb0d3d67776d7082916c9367684b40284d53e80efb5bd48a24b7b\""
	Mar 07 21:50:34 addons-963512 containerd[764]: time="2024-03-07T21:50:34.402356214Z" level=info msg="shim disconnected" id=fa7d4d59097cb0d3d67776d7082916c9367684b40284d53e80efb5bd48a24b7b
	Mar 07 21:50:34 addons-963512 containerd[764]: time="2024-03-07T21:50:34.402434343Z" level=warning msg="cleaning up after shim disconnected" id=fa7d4d59097cb0d3d67776d7082916c9367684b40284d53e80efb5bd48a24b7b namespace=k8s.io
	Mar 07 21:50:34 addons-963512 containerd[764]: time="2024-03-07T21:50:34.402448152Z" level=info msg="cleaning up dead shim"
	Mar 07 21:50:34 addons-963512 containerd[764]: time="2024-03-07T21:50:34.410854155Z" level=warning msg="cleanup warnings time=\"2024-03-07T21:50:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=9035 runtime=io.containerd.runc.v2\n"
	Mar 07 21:50:34 addons-963512 containerd[764]: time="2024-03-07T21:50:34.413622143Z" level=info msg="StopContainer for \"fa7d4d59097cb0d3d67776d7082916c9367684b40284d53e80efb5bd48a24b7b\" returns successfully"
	Mar 07 21:50:34 addons-963512 containerd[764]: time="2024-03-07T21:50:34.414226932Z" level=info msg="StopPodSandbox for \"6ed15c39d83e2e8de61a3692fa7104605328f53a213636f28b2196f98f08930e\""
	Mar 07 21:50:34 addons-963512 containerd[764]: time="2024-03-07T21:50:34.414284245Z" level=info msg="Container to stop \"fa7d4d59097cb0d3d67776d7082916c9367684b40284d53e80efb5bd48a24b7b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Mar 07 21:50:34 addons-963512 containerd[764]: time="2024-03-07T21:50:34.446376013Z" level=info msg="shim disconnected" id=6ed15c39d83e2e8de61a3692fa7104605328f53a213636f28b2196f98f08930e
	Mar 07 21:50:34 addons-963512 containerd[764]: time="2024-03-07T21:50:34.446445059Z" level=warning msg="cleaning up after shim disconnected" id=6ed15c39d83e2e8de61a3692fa7104605328f53a213636f28b2196f98f08930e namespace=k8s.io
	Mar 07 21:50:34 addons-963512 containerd[764]: time="2024-03-07T21:50:34.446457013Z" level=info msg="cleaning up dead shim"
	Mar 07 21:50:34 addons-963512 containerd[764]: time="2024-03-07T21:50:34.454448282Z" level=warning msg="cleanup warnings time=\"2024-03-07T21:50:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=9067 runtime=io.containerd.runc.v2\n"
	Mar 07 21:50:34 addons-963512 containerd[764]: time="2024-03-07T21:50:34.500141509Z" level=info msg="TearDown network for sandbox \"6ed15c39d83e2e8de61a3692fa7104605328f53a213636f28b2196f98f08930e\" successfully"
	Mar 07 21:50:34 addons-963512 containerd[764]: time="2024-03-07T21:50:34.500194932Z" level=info msg="StopPodSandbox for \"6ed15c39d83e2e8de61a3692fa7104605328f53a213636f28b2196f98f08930e\" returns successfully"
	Mar 07 21:50:34 addons-963512 containerd[764]: time="2024-03-07T21:50:34.647591739Z" level=info msg="RemoveContainer for \"fa7d4d59097cb0d3d67776d7082916c9367684b40284d53e80efb5bd48a24b7b\""
	Mar 07 21:50:34 addons-963512 containerd[764]: time="2024-03-07T21:50:34.653194251Z" level=info msg="RemoveContainer for \"fa7d4d59097cb0d3d67776d7082916c9367684b40284d53e80efb5bd48a24b7b\" returns successfully"
	Mar 07 21:50:34 addons-963512 containerd[764]: time="2024-03-07T21:50:34.653713461Z" level=error msg="ContainerStatus for \"fa7d4d59097cb0d3d67776d7082916c9367684b40284d53e80efb5bd48a24b7b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fa7d4d59097cb0d3d67776d7082916c9367684b40284d53e80efb5bd48a24b7b\": not found"
	
	
	==> coredns [2a9c63a0cc63048424bec8ba57f42b8cb9041975eefe514e6cb37a648b93960d] <==
	[INFO] 10.244.0.19:36288 - 3680 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000221398s
	[INFO] 10.244.0.19:56502 - 21815 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002681441s
	[INFO] 10.244.0.19:36288 - 16475 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002725527s
	[INFO] 10.244.0.19:56502 - 23750 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002559086s
	[INFO] 10.244.0.19:36288 - 42492 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001481495s
	[INFO] 10.244.0.19:36288 - 22817 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000222752s
	[INFO] 10.244.0.19:56502 - 38253 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000081075s
	[INFO] 10.244.0.19:39421 - 53845 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.00013723s
	[INFO] 10.244.0.19:38203 - 51947 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000244011s
	[INFO] 10.244.0.19:39421 - 30606 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.00022116s
	[INFO] 10.244.0.19:39421 - 55212 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000131626s
	[INFO] 10.244.0.19:38203 - 41667 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000193854s
	[INFO] 10.244.0.19:38203 - 60606 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000118924s
	[INFO] 10.244.0.19:39421 - 23545 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000145287s
	[INFO] 10.244.0.19:39421 - 10032 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000137238s
	[INFO] 10.244.0.19:38203 - 56967 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000126383s
	[INFO] 10.244.0.19:39421 - 26792 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000073468s
	[INFO] 10.244.0.19:38203 - 2566 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000158916s
	[INFO] 10.244.0.19:38203 - 17146 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000105042s
	[INFO] 10.244.0.19:39421 - 58842 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001501269s
	[INFO] 10.244.0.19:38203 - 54925 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001524933s
	[INFO] 10.244.0.19:39421 - 26355 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001449372s
	[INFO] 10.244.0.19:39421 - 9046 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000047573s
	[INFO] 10.244.0.19:38203 - 21924 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002268773s
	[INFO] 10.244.0.19:38203 - 30052 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000048385s
	
	
	==> describe nodes <==
	Name:               addons-963512
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-963512
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3e3656b8cff33aafa60dd2a07a4b34bce666a6a6
	                    minikube.k8s.io/name=addons-963512
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_07T21_48_17_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-963512
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 07 Mar 2024 21:48:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-963512
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 07 Mar 2024 21:50:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 07 Mar 2024 21:50:19 +0000   Thu, 07 Mar 2024 21:48:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 07 Mar 2024 21:50:19 +0000   Thu, 07 Mar 2024 21:48:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 07 Mar 2024 21:50:19 +0000   Thu, 07 Mar 2024 21:48:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 07 Mar 2024 21:50:19 +0000   Thu, 07 Mar 2024 21:48:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-963512
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	System Info:
	  Machine ID:                 bb4fa9a1690241dd8c2e79b58eaaf79a
	  System UUID:                1874d978-b1e3-427a-849f-c838a3c17338
	  Boot ID:                    5a38287e-066f-43b8-a303-a60cdb318f8a
	  Kernel Version:             5.15.0-1055-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.6.28
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (17 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-6548d5df46-c4d87    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m4s
	  default                     hello-world-app-5d77478584-frj9x           0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         38s
	  gcp-auth                    gcp-auth-5f6b4f85fd-9g9vb                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 coredns-5dd5756b68-29fkp                   100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m9s
	  kube-system                 etcd-addons-963512                         100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m23s
	  kube-system                 kindnet-ch46s                              100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m9s
	  kube-system                 kube-apiserver-addons-963512               250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m23s
	  kube-system                 kube-controller-manager-addons-963512      200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m23s
	  kube-system                 kube-proxy-w6gxd                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m9s
	  kube-system                 kube-scheduler-addons-963512               100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m24s
	  kube-system                 nvidia-device-plugin-daemonset-skr6t       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m6s
	  kube-system                 snapshot-controller-58dbcc7b99-bqr6j       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 snapshot-controller-58dbcc7b99-qjd59       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m3s
	  local-path-storage          local-path-provisioner-78b46b4d5c-tcxbn    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m3s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-dw8s7             0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     2m3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             348Mi (4%)  476Mi (6%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 2m6s   kube-proxy       
	  Normal  Starting                 2m23s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m23s  kubelet          Node addons-963512 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m23s  kubelet          Node addons-963512 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m23s  kubelet          Node addons-963512 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             2m23s  kubelet          Node addons-963512 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  2m23s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                2m23s  kubelet          Node addons-963512 status is now: NodeReady
	  Normal  RegisteredNode           2m9s   node-controller  Node addons-963512 event: Registered Node addons-963512 in Controller
	
	
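The Allocated resources figures above are each request or limit divided by the node's allocatable capacity, truncated to a whole percent. A quick sanity check of the displayed values, as a hedged sketch in Go (the constants are copied from the Allocatable block above; the truncation is kubectl's display behaviour, not part of this sketch):

    package main

    import "fmt"

    func main() {
        // Allocatable from the node description above: 2 CPUs (2000m), 8022496Ki memory.
        const allocCPUMilli, allocMemKi = 2000.0, 8022496.0
        fmt.Printf("cpu requests:    %.1f%%\n", 850/allocCPUMilli*100)   // 42.5 -> shown as 42%
        fmt.Printf("cpu limits:      %.1f%%\n", 100/allocCPUMilli*100)   // 5.0  -> shown as 5%
        fmt.Printf("memory requests: %.1f%%\n", 348*1024/allocMemKi*100) // ~4.4 -> shown as 4%
        fmt.Printf("memory limits:   %.1f%%\n", 476*1024/allocMemKi*100) // ~6.1 -> shown as 6%
    }

So 850m of the 2000m allocatable is 42.5%, displayed as 42%, and 348Mi (356352Ki) of 8022496Ki memory is about 4.4%, displayed as 4%.
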
	==> dmesg <==
	[Mar 7 21:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015962] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.450159] systemd[1]: /lib/systemd/system/cloud-config.service:8: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.002995] systemd[1]: /lib/systemd/system/cloud-init.service:19: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.004021] systemd[1]: /lib/systemd/system/cloud-init-local.service:15: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.054186] systemd[1]: /lib/systemd/system/cloud-init.target:15: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.004024] systemd[1]: /lib/systemd/system/cloud-final.service:9: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.715890] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.628665] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [920a86d849da2798ccea8cfd1b820820179d081b0c0572d8287ceb2be6fc9b6b] <==
	{"level":"info","ts":"2024-03-07T21:48:09.566821Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2024-03-07T21:48:09.56689Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2024-03-07T21:48:09.567894Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-03-07T21:48:09.56832Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-03-07T21:48:09.568337Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-03-07T21:48:09.568649Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-03-07T21:48:09.568673Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-07T21:48:10.460316Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-03-07T21:48:10.460416Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-03-07T21:48:10.460461Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-03-07T21:48:10.460506Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-03-07T21:48:10.46054Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-03-07T21:48:10.460579Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-03-07T21:48:10.46062Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-03-07T21:48:10.468456Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-963512 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-07T21:48:10.468562Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-07T21:48:10.469616Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-07T21:48:10.469853Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-07T21:48:10.472232Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-07T21:48:10.473218Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-03-07T21:48:10.484437Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-07T21:48:10.484684Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-07T21:48:10.488232Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-07T21:48:10.484723Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-07T21:48:10.536488Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> gcp-auth [3a90d0e29e4564c71421581edfef3805537dcad2b4682ebabe006499f46f7e07] <==
	2024/03/07 21:49:21 GCP Auth Webhook started!
	2024/03/07 21:49:39 Ready to marshal response ...
	2024/03/07 21:49:39 Ready to write response ...
	2024/03/07 21:50:01 Ready to marshal response ...
	2024/03/07 21:50:01 Ready to write response ...
	2024/03/07 21:50:01 Ready to marshal response ...
	2024/03/07 21:50:01 Ready to write response ...
	2024/03/07 21:50:13 Ready to marshal response ...
	2024/03/07 21:50:13 Ready to write response ...
	2024/03/07 21:50:18 Ready to marshal response ...
	2024/03/07 21:50:18 Ready to write response ...
	
	
	==> kernel <==
	 21:50:39 up 33 min,  0 users,  load average: 1.64, 1.04, 0.43
	Linux addons-963512 5.15.0-1055-aws #60~20.04.1-Ubuntu SMP Thu Feb 22 15:54:21 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kindnet [9733620690c83e7bf34ee884f9c825bad94e72799a184a757612dc15d19b71b1] <==
	I0307 21:48:34.911341       1 main.go:227] handling current node
	I0307 21:48:44.918023       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0307 21:48:44.918051       1 main.go:227] handling current node
	I0307 21:48:54.930169       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0307 21:48:54.930193       1 main.go:227] handling current node
	I0307 21:49:04.939610       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0307 21:49:04.939636       1 main.go:227] handling current node
	I0307 21:49:14.951656       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0307 21:49:14.951686       1 main.go:227] handling current node
	I0307 21:49:24.964106       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0307 21:49:24.964137       1 main.go:227] handling current node
	I0307 21:49:34.973681       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0307 21:49:34.973709       1 main.go:227] handling current node
	I0307 21:49:44.984748       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0307 21:49:44.984776       1 main.go:227] handling current node
	I0307 21:49:54.996887       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0307 21:49:54.997086       1 main.go:227] handling current node
	I0307 21:50:05.006867       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0307 21:50:05.006909       1 main.go:227] handling current node
	I0307 21:50:15.022985       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0307 21:50:15.023015       1 main.go:227] handling current node
	I0307 21:50:25.035785       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0307 21:50:25.035821       1 main.go:227] handling current node
	I0307 21:50:35.047631       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0307 21:50:35.047664       1 main.go:227] handling current node
	
	
	==> kube-apiserver [c93588c0a8cce89a2f9d23a8ebdffd7238817b290581737e0684d75181f2a45c] <==
	I0307 21:48:40.119420       1 alloc.go:330] "allocated clusterIPs" service="kube-system/csi-hostpath-attacher" clusterIPs={"IPv4":"10.104.222.42"}
	I0307 21:48:40.151455       1 controller.go:624] quota admission added evaluator for: statefulsets.apps
	W0307 21:48:40.256850       1 aggregator.go:166] failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0307 21:48:40.343102       1 alloc.go:330] "allocated clusterIPs" service="kube-system/csi-hostpath-resizer" clusterIPs={"IPv4":"10.106.228.59"}
	W0307 21:48:41.139600       1 aggregator.go:166] failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0307 21:48:41.723797       1 alloc.go:330] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.103.206.167"}
	E0307 21:49:04.089430       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.12.0:443/apis/metrics.k8s.io/v1beta1: Get "https://10.107.12.0:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.107.12.0:443: connect: connection refused
	W0307 21:49:04.089817       1 handler_proxy.go:93] no RequestInfo found in the context
	E0307 21:49:04.089964       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E0307 21:49:04.091745       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.12.0:443/apis/metrics.k8s.io/v1beta1: Get "https://10.107.12.0:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.107.12.0:443: connect: connection refused
	I0307 21:49:04.092030       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0307 21:49:04.097337       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.12.0:443/apis/metrics.k8s.io/v1beta1: Get "https://10.107.12.0:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.107.12.0:443: connect: connection refused
	I0307 21:49:04.208646       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0307 21:49:13.319888       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0307 21:49:55.880736       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0307 21:49:55.888843       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0307 21:49:56.912565       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0307 21:50:01.523381       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I0307 21:50:01.884133       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.105.120.104"}
	I0307 21:50:05.108455       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0307 21:50:12.791143       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0307 21:50:13.662880       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.96.97.144"}
	
	
	==> kube-controller-manager [971300809d6dc1c69864b575af34a12e82c177822af19738bd95669b6ddf21d1] <==
	W0307 21:50:04.226711       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0307 21:50:04.226750       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0307 21:50:06.000547       1 namespace_controller.go:182] "Namespace has been deleted" namespace="gadget"
	W0307 21:50:12.587955       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0307 21:50:12.587988       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0307 21:50:13.392624       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I0307 21:50:13.411403       1 event.go:307] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-frj9x"
	I0307 21:50:13.431907       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="40.077275ms"
	I0307 21:50:13.497035       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="65.077332ms"
	I0307 21:50:13.541928       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="44.839052ms"
	I0307 21:50:13.542233       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="83.348µs"
	I0307 21:50:14.963884       1 event.go:307] "Event occurred" object="default/hpvc-restore" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0307 21:50:15.131128       1 event.go:307] "Event occurred" object="default/hpvc-restore" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0307 21:50:16.540974       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="64.78µs"
	I0307 21:50:17.550549       1 event.go:307] "Event occurred" object="default/hpvc-restore" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0307 21:50:17.576639       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="44.734µs"
	I0307 21:50:18.566137       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="46.605µs"
	I0307 21:50:28.057918       1 stateful_set.go:458] "StatefulSet has been deleted" key="kube-system/csi-hostpath-attacher"
	I0307 21:50:28.169495       1 stateful_set.go:458] "StatefulSet has been deleted" key="kube-system/csi-hostpath-resizer"
	W0307 21:50:29.093458       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0307 21:50:29.093494       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0307 21:50:30.662073       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="91.872µs"
	I0307 21:50:31.302178       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I0307 21:50:31.309590       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-76dc478dd8" duration="7.672µs"
	I0307 21:50:31.312232       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	
	
	==> kube-proxy [122c40b47440e522ae9cdd9b8a1a45cab186bbcd20d893bb0dbe24c36c35a9a2] <==
	I0307 21:48:32.831030       1 server_others.go:69] "Using iptables proxy"
	I0307 21:48:32.849505       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I0307 21:48:32.934192       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0307 21:48:32.945014       1 server_others.go:152] "Using iptables Proxier"
	I0307 21:48:32.945051       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0307 21:48:32.945059       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0307 21:48:32.945090       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0307 21:48:32.945296       1 server.go:846] "Version info" version="v1.28.4"
	I0307 21:48:32.945306       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0307 21:48:32.947456       1 config.go:188] "Starting service config controller"
	I0307 21:48:32.947471       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0307 21:48:32.947505       1 config.go:97] "Starting endpoint slice config controller"
	I0307 21:48:32.947511       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0307 21:48:32.949775       1 config.go:315] "Starting node config controller"
	I0307 21:48:32.949787       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0307 21:48:33.048438       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0307 21:48:33.048494       1 shared_informer.go:318] Caches are synced for service config
	I0307 21:48:33.049834       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [a122979188eacd9de5ed8cadad536037c22516ea1fd49bb0f9b0981e82e62f82] <==
	W0307 21:48:14.335392       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0307 21:48:14.335425       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0307 21:48:14.335473       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0307 21:48:14.335490       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0307 21:48:14.335530       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0307 21:48:14.335548       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0307 21:48:14.335583       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0307 21:48:14.335599       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0307 21:48:14.337807       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0307 21:48:14.337840       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0307 21:48:14.337907       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0307 21:48:14.337933       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0307 21:48:14.340016       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0307 21:48:14.340044       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0307 21:48:14.340136       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0307 21:48:14.340157       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0307 21:48:14.340214       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0307 21:48:14.340231       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0307 21:48:14.340301       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0307 21:48:14.340323       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0307 21:48:14.342956       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0307 21:48:14.343134       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0307 21:48:14.343293       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0307 21:48:14.343341       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0307 21:48:15.726000       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 07 21:50:29 addons-963512 kubelet[1478]: I0307 21:50:29.769345    1478 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4wc9j\" (UniqueName: \"kubernetes.io/projected/9bb07d00-3bb1-44e2-b99e-2d1e7e05e2ad-kube-api-access-4wc9j\") pod \"9bb07d00-3bb1-44e2-b99e-2d1e7e05e2ad\" (UID: \"9bb07d00-3bb1-44e2-b99e-2d1e7e05e2ad\") "
	Mar 07 21:50:29 addons-963512 kubelet[1478]: I0307 21:50:29.771353    1478 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bb07d00-3bb1-44e2-b99e-2d1e7e05e2ad-kube-api-access-4wc9j" (OuterVolumeSpecName: "kube-api-access-4wc9j") pod "9bb07d00-3bb1-44e2-b99e-2d1e7e05e2ad" (UID: "9bb07d00-3bb1-44e2-b99e-2d1e7e05e2ad"). InnerVolumeSpecName "kube-api-access-4wc9j". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Mar 07 21:50:29 addons-963512 kubelet[1478]: I0307 21:50:29.869763    1478 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-4wc9j\" (UniqueName: \"kubernetes.io/projected/9bb07d00-3bb1-44e2-b99e-2d1e7e05e2ad-kube-api-access-4wc9j\") on node \"addons-963512\" DevicePath \"\""
	Mar 07 21:50:30 addons-963512 kubelet[1478]: I0307 21:50:30.541702    1478 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="4e28b525-6966-4a98-9c3d-b3bc7b156676" path="/var/lib/kubelet/pods/4e28b525-6966-4a98-9c3d-b3bc7b156676/volumes"
	Mar 07 21:50:30 addons-963512 kubelet[1478]: I0307 21:50:30.542098    1478 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="ae25642c-4c09-4ba4-986a-fc0894df39a3" path="/var/lib/kubelet/pods/ae25642c-4c09-4ba4-986a-fc0894df39a3/volumes"
	Mar 07 21:50:30 addons-963512 kubelet[1478]: I0307 21:50:30.542448    1478 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="f5001b06-75f0-4446-b62f-e7de26581524" path="/var/lib/kubelet/pods/f5001b06-75f0-4446-b62f-e7de26581524/volumes"
	Mar 07 21:50:30 addons-963512 kubelet[1478]: I0307 21:50:30.625891    1478 scope.go:117] "RemoveContainer" containerID="70496f1ab8ef7b866761277010419bc742ceefb80fd63f007777639fff5ca5e3"
	Mar 07 21:50:30 addons-963512 kubelet[1478]: I0307 21:50:30.632165    1478 scope.go:117] "RemoveContainer" containerID="3ea9e182c1b3426f0d101b1a4d7d49a809df97812d487e0912b3075dac42f767"
	Mar 07 21:50:30 addons-963512 kubelet[1478]: E0307 21:50:30.632520    1478 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5d77478584-frj9x_default(b0811f24-911e-472f-aa20-a0f6dffff240)\"" pod="default/hello-world-app-5d77478584-frj9x" podUID="b0811f24-911e-472f-aa20-a0f6dffff240"
	Mar 07 21:50:30 addons-963512 kubelet[1478]: I0307 21:50:30.636457    1478 scope.go:117] "RemoveContainer" containerID="8e89db9594f54b2c2afb100adef116ef936275f2d00b4367cbcdb8cf7ac36894"
	Mar 07 21:50:31 addons-963512 kubelet[1478]: I0307 21:50:31.538366    1478 kubelet_pods.go:906] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-skr6t" secret="" err="secret \"gcp-auth\" not found"
	Mar 07 21:50:32 addons-963512 kubelet[1478]: I0307 21:50:32.541401    1478 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="1d45f7a9-05f2-418c-9f70-99b1c2bfa121" path="/var/lib/kubelet/pods/1d45f7a9-05f2-418c-9f70-99b1c2bfa121/volumes"
	Mar 07 21:50:32 addons-963512 kubelet[1478]: I0307 21:50:32.541853    1478 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="36f3af08-b59e-40ba-9a39-695007d9cb26" path="/var/lib/kubelet/pods/36f3af08-b59e-40ba-9a39-695007d9cb26/volumes"
	Mar 07 21:50:32 addons-963512 kubelet[1478]: I0307 21:50:32.542243    1478 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="9bb07d00-3bb1-44e2-b99e-2d1e7e05e2ad" path="/var/lib/kubelet/pods/9bb07d00-3bb1-44e2-b99e-2d1e7e05e2ad/volumes"
	Mar 07 21:50:34 addons-963512 kubelet[1478]: I0307 21:50:34.608211    1478 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-swhsm\" (UniqueName: \"kubernetes.io/projected/2cb806f5-6cec-4632-874a-a41175208135-kube-api-access-swhsm\") pod \"2cb806f5-6cec-4632-874a-a41175208135\" (UID: \"2cb806f5-6cec-4632-874a-a41175208135\") "
	Mar 07 21:50:34 addons-963512 kubelet[1478]: I0307 21:50:34.608684    1478 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2cb806f5-6cec-4632-874a-a41175208135-webhook-cert\") pod \"2cb806f5-6cec-4632-874a-a41175208135\" (UID: \"2cb806f5-6cec-4632-874a-a41175208135\") "
	Mar 07 21:50:34 addons-963512 kubelet[1478]: I0307 21:50:34.610481    1478 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cb806f5-6cec-4632-874a-a41175208135-kube-api-access-swhsm" (OuterVolumeSpecName: "kube-api-access-swhsm") pod "2cb806f5-6cec-4632-874a-a41175208135" (UID: "2cb806f5-6cec-4632-874a-a41175208135"). InnerVolumeSpecName "kube-api-access-swhsm". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Mar 07 21:50:34 addons-963512 kubelet[1478]: I0307 21:50:34.613047    1478 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cb806f5-6cec-4632-874a-a41175208135-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "2cb806f5-6cec-4632-874a-a41175208135" (UID: "2cb806f5-6cec-4632-874a-a41175208135"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Mar 07 21:50:34 addons-963512 kubelet[1478]: I0307 21:50:34.645130    1478 scope.go:117] "RemoveContainer" containerID="fa7d4d59097cb0d3d67776d7082916c9367684b40284d53e80efb5bd48a24b7b"
	Mar 07 21:50:34 addons-963512 kubelet[1478]: I0307 21:50:34.653444    1478 scope.go:117] "RemoveContainer" containerID="fa7d4d59097cb0d3d67776d7082916c9367684b40284d53e80efb5bd48a24b7b"
	Mar 07 21:50:34 addons-963512 kubelet[1478]: E0307 21:50:34.653895    1478 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fa7d4d59097cb0d3d67776d7082916c9367684b40284d53e80efb5bd48a24b7b\": not found" containerID="fa7d4d59097cb0d3d67776d7082916c9367684b40284d53e80efb5bd48a24b7b"
	Mar 07 21:50:34 addons-963512 kubelet[1478]: I0307 21:50:34.653945    1478 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fa7d4d59097cb0d3d67776d7082916c9367684b40284d53e80efb5bd48a24b7b"} err="failed to get container status \"fa7d4d59097cb0d3d67776d7082916c9367684b40284d53e80efb5bd48a24b7b\": rpc error: code = NotFound desc = an error occurred when try to find container \"fa7d4d59097cb0d3d67776d7082916c9367684b40284d53e80efb5bd48a24b7b\": not found"
	Mar 07 21:50:34 addons-963512 kubelet[1478]: I0307 21:50:34.709325    1478 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-swhsm\" (UniqueName: \"kubernetes.io/projected/2cb806f5-6cec-4632-874a-a41175208135-kube-api-access-swhsm\") on node \"addons-963512\" DevicePath \"\""
	Mar 07 21:50:34 addons-963512 kubelet[1478]: I0307 21:50:34.709367    1478 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2cb806f5-6cec-4632-874a-a41175208135-webhook-cert\") on node \"addons-963512\" DevicePath \"\""
	Mar 07 21:50:36 addons-963512 kubelet[1478]: I0307 21:50:36.542285    1478 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="2cb806f5-6cec-4632-874a-a41175208135" path="/var/lib/kubelet/pods/2cb806f5-6cec-4632-874a-a41175208135/volumes"
	
	
	==> storage-provisioner [79369d5a315db50e2a00e1987fc2f9372358f87dcf1ff0177ac1f3a1d9f09159] <==
	I0307 21:48:38.179571       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0307 21:48:38.202176       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0307 21:48:38.202224       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0307 21:48:38.213360       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0307 21:48:38.215872       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-963512_b356bae8-d98a-4b20-835b-1cdd2b0d8327!
	I0307 21:48:38.215945       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ce7f5935-4b9c-4b6b-b5e5-91cca2dff8fa", APIVersion:"v1", ResourceVersion:"653", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-963512_b356bae8-d98a-4b20-835b-1cdd2b0d8327 became leader
	I0307 21:48:38.319625       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-963512_b356bae8-d98a-4b20-835b-1cdd2b0d8327!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-963512 -n addons-963512
helpers_test.go:261: (dbg) Run:  kubectl --context addons-963512 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (39.63s)

TestAddons/parallel/CSI (69.47s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 43.220687ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-963512 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-963512 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-963512 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-963512 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-963512 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-963512 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-963512 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-963512 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-963512 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-963512 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-963512 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-963512 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-963512 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-963512 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-963512 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-963512 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-963512 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-963512 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-963512 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-963512 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-963512 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-963512 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-963512 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-963512 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-963512 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-963512 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-963512 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-963512 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-963512 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-963512 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-963512 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-963512 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-963512 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-963512 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-963512 get pvc hpvc -o jsonpath={.status.phase} -n default
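Each helpers_test.go:394 line above is one iteration of the helper's poll loop, re-reading the claim's .status.phase until it reports Bound. A minimal sketch of that kind of loop, assuming only that kubectl is on PATH (waitForPVCPhase and its 2-second interval are illustrative, not the real helper):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // waitForPVCPhase polls `kubectl get pvc` until the claim reports the
    // desired phase or the timeout elapses.
    func waitForPVCPhase(context, name, namespace, want string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            out, err := exec.Command("kubectl", "--context", context,
                "get", "pvc", name, "-n", namespace,
                "-o", "jsonpath={.status.phase}").Output()
            if err == nil && strings.TrimSpace(string(out)) == want {
                return nil
            }
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("pvc %s/%s never reached phase %q within %v", namespace, name, want, timeout)
    }

    func main() {
        if err := waitForPVCPhase("addons-963512", "hpvc", "default", "Bound", 6*time.Minute); err != nil {
            fmt.Println(err)
        }
    }
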
addons_test.go:574: (dbg) Run:  kubectl --context addons-963512 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [3d1c1fef-962f-463d-a22c-3fc8b1aaaaff] Pending
helpers_test.go:344: "task-pv-pod" [3d1c1fef-962f-463d-a22c-3fc8b1aaaaff] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [3d1c1fef-962f-463d-a22c-3fc8b1aaaaff] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.004667545s
addons_test.go:584: (dbg) Run:  kubectl --context addons-963512 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-963512 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-963512 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-963512 delete pod task-pv-pod
addons_test.go:600: (dbg) Run:  kubectl --context addons-963512 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-963512 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-963512 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-963512 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-963512 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-963512 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-963512 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [271a9150-c6e2-4159-adb4-2c4f011c7612] Pending
helpers_test.go:344: "task-pv-pod-restore" [271a9150-c6e2-4159-adb4-2c4f011c7612] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [271a9150-c6e2-4159-adb4-2c4f011c7612] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003398168s
addons_test.go:626: (dbg) Run:  kubectl --context addons-963512 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-963512 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-963512 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-arm64 -p addons-963512 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-linux-arm64 -p addons-963512 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.804732777s)
addons_test.go:642: (dbg) Run:  out/minikube-linux-arm64 -p addons-963512 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:642: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-963512 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (755.219263ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0307 21:50:34.167392   19040 out.go:291] Setting OutFile to fd 1 ...
	I0307 21:50:34.168032   19040 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 21:50:34.168070   19040 out.go:304] Setting ErrFile to fd 2...
	I0307 21:50:34.168092   19040 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 21:50:34.168409   19040 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18320-2408/.minikube/bin
	I0307 21:50:34.168725   19040 mustload.go:65] Loading cluster: addons-963512
	I0307 21:50:34.169208   19040 config.go:182] Loaded profile config "addons-963512": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0307 21:50:34.169254   19040 addons.go:597] checking whether the cluster is paused
	I0307 21:50:34.169390   19040 config.go:182] Loaded profile config "addons-963512": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0307 21:50:34.169425   19040 host.go:66] Checking if "addons-963512" exists ...
	I0307 21:50:34.169950   19040 cli_runner.go:164] Run: docker container inspect addons-963512 --format={{.State.Status}}
	I0307 21:50:34.186830   19040 ssh_runner.go:195] Run: systemctl --version
	I0307 21:50:34.186883   19040 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-963512
	I0307 21:50:34.203349   19040 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18320-2408/.minikube/machines/addons-963512/id_rsa Username:docker}
	I0307 21:50:34.296423   19040 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0307 21:50:34.296497   19040 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0307 21:50:34.334394   19040 cri.go:89] found id: "30d5e600e3f0677e6f5fa0aaaf1493e3322cfe409cee8c2ae8f08f8a7997e880"
	I0307 21:50:34.334417   19040 cri.go:89] found id: "2b23d0b44dd5460949acac4c7f7c19fa5a70b5ad7f7d913a4673eef121db994a"
	I0307 21:50:34.334423   19040 cri.go:89] found id: "cc95bc26ba37be2b1fe54af13c0ff9894610fb48bb427b7c7ee89bdc5fadaeb0"
	I0307 21:50:34.334426   19040 cri.go:89] found id: "2a9c63a0cc63048424bec8ba57f42b8cb9041975eefe514e6cb37a648b93960d"
	I0307 21:50:34.334429   19040 cri.go:89] found id: "79369d5a315db50e2a00e1987fc2f9372358f87dcf1ff0177ac1f3a1d9f09159"
	I0307 21:50:34.334433   19040 cri.go:89] found id: "9733620690c83e7bf34ee884f9c825bad94e72799a184a757612dc15d19b71b1"
	I0307 21:50:34.334437   19040 cri.go:89] found id: "122c40b47440e522ae9cdd9b8a1a45cab186bbcd20d893bb0dbe24c36c35a9a2"
	I0307 21:50:34.334440   19040 cri.go:89] found id: "971300809d6dc1c69864b575af34a12e82c177822af19738bd95669b6ddf21d1"
	I0307 21:50:34.334443   19040 cri.go:89] found id: "c93588c0a8cce89a2f9d23a8ebdffd7238817b290581737e0684d75181f2a45c"
	I0307 21:50:34.334449   19040 cri.go:89] found id: "a122979188eacd9de5ed8cadad536037c22516ea1fd49bb0f9b0981e82e62f82"
	I0307 21:50:34.334453   19040 cri.go:89] found id: "920a86d849da2798ccea8cfd1b820820179d081b0c0572d8287ceb2be6fc9b6b"
	I0307 21:50:34.334456   19040 cri.go:89] found id: ""
	I0307 21:50:34.334505   19040 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0307 21:50:34.402976   19040 out.go:177] 
	W0307 21:50:34.404827   19040 out.go:239] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-03-07T21:50:34Z" level=error msg="stat /run/containerd/runc/k8s.io/fa7d4d59097cb0d3d67776d7082916c9367684b40284d53e80efb5bd48a24b7b: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-03-07T21:50:34Z" level=error msg="stat /run/containerd/runc/k8s.io/fa7d4d59097cb0d3d67776d7082916c9367684b40284d53e80efb5bd48a24b7b: no such file or directory"
	
	W0307 21:50:34.404852   19040 out.go:239] * 
	* 
	W0307 21:50:34.858423   19040 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0307 21:50:34.861686   19040 out.go:177] 

** /stderr **
addons_test.go:644: failed to disable volumesnapshots addon: args "out/minikube-linux-arm64 -p addons-963512 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
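The exit status 11 appears to stem from a teardown race rather than from the volumesnapshots addon itself: the pre-disable paused check shells out to sudo runc --root /run/containerd/runc/k8s.io list -f json at 21:50:34, the same second the kubelet was removing container fa7d4d59... (see the RemoveContainer entries at 21:50:34 in the kubelet log above), so runc's stat of the vanishing state directory failed and the whole disable aborted. A bounded retry that tolerates such transient failures might look like the following sketch (illustrative only, not minikube's actual code):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // listPausedWithRetry re-runs `runc list` a few times so that a container
    // torn down mid-listing does not abort the caller outright.
    func listPausedWithRetry(root string, attempts int) ([]byte, error) {
        var lastErr error
        for i := 0; i < attempts; i++ {
            out, err := exec.Command("sudo", "runc", "--root", root, "list", "-f", "json").Output()
            if err == nil {
                return out, nil
            }
            lastErr = err
            time.Sleep(200 * time.Millisecond) // let the runtime finish the teardown
        }
        return nil, fmt.Errorf("runc list failed after %d attempts: %w", attempts, lastErr)
    }

    func main() {
        out, err := listPausedWithRetry("/run/containerd/runc/k8s.io", 3)
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Printf("%d bytes of container state\n", len(out))
    }

Bounding the retries keeps a genuinely broken runtime from hanging the command, while a few short sleeps are enough to ride out a single container teardown.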
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/CSI]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-963512
helpers_test.go:235: (dbg) docker inspect addons-963512:

-- stdout --
	[
	    {
	        "Id": "268fd7ac0849d50063e697283d8842148a0c8d046089c26eb4fdb401bc544cf5",
	        "Created": "2024-03-07T21:47:54.802815369Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 9050,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-03-07T21:47:55.161570378Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:4a9b65157dd7fb2ddb7cb7afe975b3dc288e9877c60d13613a69dd41a70e2e4e",
	        "ResolvConfPath": "/var/lib/docker/containers/268fd7ac0849d50063e697283d8842148a0c8d046089c26eb4fdb401bc544cf5/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/268fd7ac0849d50063e697283d8842148a0c8d046089c26eb4fdb401bc544cf5/hostname",
	        "HostsPath": "/var/lib/docker/containers/268fd7ac0849d50063e697283d8842148a0c8d046089c26eb4fdb401bc544cf5/hosts",
	        "LogPath": "/var/lib/docker/containers/268fd7ac0849d50063e697283d8842148a0c8d046089c26eb4fdb401bc544cf5/268fd7ac0849d50063e697283d8842148a0c8d046089c26eb4fdb401bc544cf5-json.log",
	        "Name": "/addons-963512",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-963512:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-963512",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/874c8a531921a0cb274f3c9f29456a2df92e0721828de606d5cdced0be1ba832-init/diff:/var/lib/docker/overlay2/6822645c415ab3e3451f0dc6746bf9aea38c91b1070d7030c1ba88a1ef7f69e0/diff",
	                "MergedDir": "/var/lib/docker/overlay2/874c8a531921a0cb274f3c9f29456a2df92e0721828de606d5cdced0be1ba832/merged",
	                "UpperDir": "/var/lib/docker/overlay2/874c8a531921a0cb274f3c9f29456a2df92e0721828de606d5cdced0be1ba832/diff",
	                "WorkDir": "/var/lib/docker/overlay2/874c8a531921a0cb274f3c9f29456a2df92e0721828de606d5cdced0be1ba832/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-963512",
	                "Source": "/var/lib/docker/volumes/addons-963512/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-963512",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-963512",
	                "name.minikube.sigs.k8s.io": "addons-963512",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3dd1803eafd80632ff2dcf736a6697f6dc853c0dd6401c9973bcb636be1dc5fa",
	            "SandboxKey": "/var/run/docker/netns/3dd1803eafd8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-963512": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "268fd7ac0849",
	                        "addons-963512"
	                    ],
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "8843454e1a2e9e62c7cd83baebf63228759ac1040db402b7b3da8ab031c3b9fb",
	                    "EndpointID": "46970578f819585d5a2ff047cfd1a0b4b9da6fd433a9be6f983a9cce33fb7164",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "addons-963512",
	                        "268fd7ac0849"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
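
The docker inspect dump above is the first post-mortem artifact; the checks that follow only need a few fields under State. A minimal sketch of decoding just that block with the Go standard library (the struct mirrors the JSON shown above; this is an illustration, not the harness's code):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// container mirrors only the State fields read here; docker inspect
	// prints a JSON array with one element per inspected container.
	type container struct {
		State struct {
			Status  string `json:"Status"`
			Running bool   `json:"Running"`
			Pid     int    `json:"Pid"`
		} `json:"State"`
	}

	func main() {
		out, err := exec.Command("docker", "inspect", "addons-963512").Output()
		if err != nil {
			panic(err)
		}
		var cs []container
		if err := json.Unmarshal(out, &cs); err != nil {
			panic(err)
		}
		// For the dump above this prints: {Status:running Running:true Pid:9050}
		fmt.Printf("%+v\n", cs[0].State)
	}
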
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-963512 -n addons-963512
helpers_test.go:244: <<< TestAddons/parallel/CSI FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/CSI]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-963512 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-963512 logs -n 25: (1.601803337s)
helpers_test.go:252: TestAddons/parallel/CSI logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                | minikube               | jenkins | v1.32.0 | 07 Mar 24 21:47 UTC | 07 Mar 24 21:47 UTC |
	| delete  | -p download-only-526545              | download-only-526545   | jenkins | v1.32.0 | 07 Mar 24 21:47 UTC | 07 Mar 24 21:47 UTC |
	| start   | -o=json --download-only              | download-only-336944   | jenkins | v1.32.0 | 07 Mar 24 21:47 UTC |                     |
	|         | -p download-only-336944              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2    |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.32.0 | 07 Mar 24 21:47 UTC | 07 Mar 24 21:47 UTC |
	| delete  | -p download-only-336944              | download-only-336944   | jenkins | v1.32.0 | 07 Mar 24 21:47 UTC | 07 Mar 24 21:47 UTC |
	| delete  | -p download-only-150781              | download-only-150781   | jenkins | v1.32.0 | 07 Mar 24 21:47 UTC | 07 Mar 24 21:47 UTC |
	| delete  | -p download-only-526545              | download-only-526545   | jenkins | v1.32.0 | 07 Mar 24 21:47 UTC | 07 Mar 24 21:47 UTC |
	| delete  | -p download-only-336944              | download-only-336944   | jenkins | v1.32.0 | 07 Mar 24 21:47 UTC | 07 Mar 24 21:47 UTC |
	| start   | --download-only -p                   | download-docker-983577 | jenkins | v1.32.0 | 07 Mar 24 21:47 UTC |                     |
	|         | download-docker-983577               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p download-docker-983577            | download-docker-983577 | jenkins | v1.32.0 | 07 Mar 24 21:47 UTC | 07 Mar 24 21:47 UTC |
	| start   | --download-only -p                   | binary-mirror-880422   | jenkins | v1.32.0 | 07 Mar 24 21:47 UTC |                     |
	|         | binary-mirror-880422                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:39807               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-880422              | binary-mirror-880422   | jenkins | v1.32.0 | 07 Mar 24 21:47 UTC | 07 Mar 24 21:47 UTC |
	| addons  | enable dashboard -p                  | addons-963512          | jenkins | v1.32.0 | 07 Mar 24 21:47 UTC |                     |
	|         | addons-963512                        |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                 | addons-963512          | jenkins | v1.32.0 | 07 Mar 24 21:47 UTC |                     |
	|         | addons-963512                        |                        |         |         |                     |                     |
	| start   | -p addons-963512 --wait=true         | addons-963512          | jenkins | v1.32.0 | 07 Mar 24 21:47 UTC | 07 Mar 24 21:49 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --driver=docker        |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	| ip      | addons-963512 ip                     | addons-963512          | jenkins | v1.32.0 | 07 Mar 24 21:49 UTC | 07 Mar 24 21:49 UTC |
	| addons  | addons-963512 addons disable         | addons-963512          | jenkins | v1.32.0 | 07 Mar 24 21:49 UTC | 07 Mar 24 21:49 UTC |
	|         | registry --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | addons-963512 addons                 | addons-963512          | jenkins | v1.32.0 | 07 Mar 24 21:49 UTC | 07 Mar 24 21:49 UTC |
	|         | disable metrics-server               |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p          | addons-963512          | jenkins | v1.32.0 | 07 Mar 24 21:49 UTC | 07 Mar 24 21:50 UTC |
	|         | addons-963512                        |                        |         |         |                     |                     |
	| ssh     | addons-963512 ssh curl -s            | addons-963512          | jenkins | v1.32.0 | 07 Mar 24 21:50 UTC | 07 Mar 24 21:50 UTC |
	|         | http://127.0.0.1/ -H 'Host:          |                        |         |         |                     |                     |
	|         | nginx.example.com'                   |                        |         |         |                     |                     |
	| ip      | addons-963512 ip                     | addons-963512          | jenkins | v1.32.0 | 07 Mar 24 21:50 UTC | 07 Mar 24 21:50 UTC |
	| addons  | addons-963512 addons                 | addons-963512          | jenkins | v1.32.0 | 07 Mar 24 21:50 UTC | 07 Mar 24 21:50 UTC |
	|         | disable csi-hostpath-driver          |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-963512 addons disable         | addons-963512          | jenkins | v1.32.0 | 07 Mar 24 21:50 UTC | 07 Mar 24 21:50 UTC |
	|         | ingress-dns --alsologtostderr        |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | addons-963512 addons disable         | addons-963512          | jenkins | v1.32.0 | 07 Mar 24 21:50 UTC |                     |
	|         | ingress --alsologtostderr -v=1       |                        |         |         |                     |                     |
	| addons  | addons-963512 addons                 | addons-963512          | jenkins | v1.32.0 | 07 Mar 24 21:50 UTC |                     |
	|         | disable volumesnapshots              |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/07 21:47:30
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0307 21:47:30.856221    8582 out.go:291] Setting OutFile to fd 1 ...
	I0307 21:47:30.856384    8582 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 21:47:30.856395    8582 out.go:304] Setting ErrFile to fd 2...
	I0307 21:47:30.856401    8582 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 21:47:30.856646    8582 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18320-2408/.minikube/bin
	I0307 21:47:30.857070    8582 out.go:298] Setting JSON to false
	I0307 21:47:30.857781    8582 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":1794,"bootTime":1709846257,"procs":147,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0307 21:47:30.857842    8582 start.go:139] virtualization:  
	I0307 21:47:30.860880    8582 out.go:177] * [addons-963512] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0307 21:47:30.863608    8582 out.go:177]   - MINIKUBE_LOCATION=18320
	I0307 21:47:30.865641    8582 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 21:47:30.863795    8582 notify.go:220] Checking for updates...
	I0307 21:47:30.870296    8582 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18320-2408/kubeconfig
	I0307 21:47:30.872751    8582 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18320-2408/.minikube
	I0307 21:47:30.875270    8582 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0307 21:47:30.877572    8582 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 21:47:30.879830    8582 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 21:47:30.900582    8582 docker.go:122] docker version: linux-25.0.4:Docker Engine - Community
	I0307 21:47:30.900702    8582 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0307 21:47:30.972380    8582 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:49 SystemTime:2024-03-07 21:47:30.963315752 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0307 21:47:30.972486    8582 docker.go:295] overlay module found
	I0307 21:47:30.974746    8582 out.go:177] * Using the docker driver based on user configuration
	I0307 21:47:30.976500    8582 start.go:297] selected driver: docker
	I0307 21:47:30.976516    8582 start.go:901] validating driver "docker" against <nil>
	I0307 21:47:30.976529    8582 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 21:47:30.977153    8582 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0307 21:47:31.032311    8582 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:49 SystemTime:2024-03-07 21:47:31.024064167 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0307 21:47:31.032470    8582 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0307 21:47:31.032695    8582 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0307 21:47:31.035153    8582 out.go:177] * Using Docker driver with root privileges
	I0307 21:47:31.037434    8582 cni.go:84] Creating CNI manager for ""
	I0307 21:47:31.037461    8582 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0307 21:47:31.037474    8582 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0307 21:47:31.037555    8582 start.go:340] cluster config:
	{Name:addons-963512 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-963512 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 21:47:31.039910    8582 out.go:177] * Starting "addons-963512" primary control-plane node in "addons-963512" cluster
	I0307 21:47:31.041912    8582 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0307 21:47:31.043839    8582 out.go:177] * Pulling base image v0.0.42-1708944392-18244 ...
	I0307 21:47:31.046058    8582 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0307 21:47:31.046103    8582 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18320-2408/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4
	I0307 21:47:31.046117    8582 cache.go:56] Caching tarball of preloaded images
	I0307 21:47:31.046149    8582 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local docker daemon
	I0307 21:47:31.046204    8582 preload.go:173] Found /home/jenkins/minikube-integration/18320-2408/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0307 21:47:31.046214    8582 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on containerd
	I0307 21:47:31.046581    8582 profile.go:142] Saving config to /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/addons-963512/config.json ...
	I0307 21:47:31.046612    8582 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/addons-963512/config.json: {Name:mkfaff1358e3290a9e5529ff48ed6fe910f98aae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 21:47:31.060479    8582 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 to local cache
	I0307 21:47:31.060590    8582 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local cache directory
	I0307 21:47:31.060618    8582 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local cache directory, skipping pull
	I0307 21:47:31.060627    8582 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 exists in cache, skipping pull
	I0307 21:47:31.060635    8582 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 as a tarball
	I0307 21:47:31.060648    8582 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 from local cache
	I0307 21:47:46.769630    8582 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 from cached tarball
	I0307 21:47:46.769662    8582 cache.go:194] Successfully downloaded all kic artifacts
	I0307 21:47:46.769690    8582 start.go:360] acquireMachinesLock for addons-963512: {Name:mkc22c72bf972f547a77fb9031585d63b88d0bcc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 21:47:46.769804    8582 start.go:364] duration metric: took 86.925µs to acquireMachinesLock for "addons-963512"
	I0307 21:47:46.769828    8582 start.go:93] Provisioning new machine with config: &{Name:addons-963512 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-963512 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0307 21:47:46.769900    8582 start.go:125] createHost starting for "" (driver="docker")
	I0307 21:47:46.772846    8582 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0307 21:47:46.773117    8582 start.go:159] libmachine.API.Create for "addons-963512" (driver="docker")
	I0307 21:47:46.773155    8582 client.go:168] LocalClient.Create starting
	I0307 21:47:46.773288    8582 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18320-2408/.minikube/certs/ca.pem
	I0307 21:47:46.986226    8582 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18320-2408/.minikube/certs/cert.pem
	I0307 21:47:48.041100    8582 cli_runner.go:164] Run: docker network inspect addons-963512 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0307 21:47:48.056457    8582 cli_runner.go:211] docker network inspect addons-963512 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0307 21:47:48.056546    8582 network_create.go:281] running [docker network inspect addons-963512] to gather additional debugging logs...
	I0307 21:47:48.056570    8582 cli_runner.go:164] Run: docker network inspect addons-963512
	W0307 21:47:48.072845    8582 cli_runner.go:211] docker network inspect addons-963512 returned with exit code 1
	I0307 21:47:48.072880    8582 network_create.go:284] error running [docker network inspect addons-963512]: docker network inspect addons-963512: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-963512 not found
	I0307 21:47:48.072912    8582 network_create.go:286] output of [docker network inspect addons-963512]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-963512 not found
	
	** /stderr **
	I0307 21:47:48.073022    8582 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0307 21:47:48.090405    8582 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40004bbfb0}
	I0307 21:47:48.090453    8582 network_create.go:124] attempt to create docker network addons-963512 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0307 21:47:48.090518    8582 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-963512 addons-963512
	I0307 21:47:48.153355    8582 network_create.go:108] docker network addons-963512 192.168.49.0/24 created
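
The lines above show the subnet pick at network.go:206 and the resulting docker network create: minikube takes the free private subnet 192.168.49.0/24 and derives the gateway (192.168.49.1) and the node's static IP (192.168.49.2) from it. A small sketch of that derivation (my own, chosen to match the values printed in the log):

	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		// Subnet as selected in the log: 192.168.49.0/24.
		_, ipnet, err := net.ParseCIDR("192.168.49.0/24")
		if err != nil {
			panic(err)
		}
		base := ipnet.IP.To4()
		gateway := net.IPv4(base[0], base[1], base[2], 1)   // 192.168.49.1
		clientMin := net.IPv4(base[0], base[1], base[2], 2) // 192.168.49.2, the node IP
		clientMax := net.IPv4(base[0], base[1], base[2], 254)
		fmt.Println(ipnet, gateway, clientMin, clientMax)
	}
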
	I0307 21:47:48.153387    8582 kic.go:121] calculated static IP "192.168.49.2" for the "addons-963512" container
	I0307 21:47:48.153458    8582 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0307 21:47:48.167103    8582 cli_runner.go:164] Run: docker volume create addons-963512 --label name.minikube.sigs.k8s.io=addons-963512 --label created_by.minikube.sigs.k8s.io=true
	I0307 21:47:48.183033    8582 oci.go:103] Successfully created a docker volume addons-963512
	I0307 21:47:48.183124    8582 cli_runner.go:164] Run: docker run --rm --name addons-963512-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-963512 --entrypoint /usr/bin/test -v addons-963512:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 -d /var/lib
	I0307 21:47:50.448514    8582 cli_runner.go:217] Completed: docker run --rm --name addons-963512-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-963512 --entrypoint /usr/bin/test -v addons-963512:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 -d /var/lib: (2.265346309s)
	I0307 21:47:50.448544    8582 oci.go:107] Successfully prepared a docker volume addons-963512
	I0307 21:47:50.448568    8582 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0307 21:47:50.448587    8582 kic.go:194] Starting extracting preloaded images to volume ...
	I0307 21:47:50.448678    8582 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18320-2408/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-963512:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 -I lz4 -xf /preloaded.tar -C /extractDir
	I0307 21:47:54.732239    8582 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18320-2408/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-963512:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 -I lz4 -xf /preloaded.tar -C /extractDir: (4.283510113s)
	I0307 21:47:54.732297    8582 kic.go:203] duration metric: took 4.283706092s to extract preloaded images to volume ...
	W0307 21:47:54.732452    8582 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0307 21:47:54.732606    8582 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0307 21:47:54.786769    8582 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-963512 --name addons-963512 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-963512 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-963512 --network addons-963512 --ip 192.168.49.2 --volume addons-963512:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08
	I0307 21:47:55.171086    8582 cli_runner.go:164] Run: docker container inspect addons-963512 --format={{.State.Running}}
	I0307 21:47:55.192903    8582 cli_runner.go:164] Run: docker container inspect addons-963512 --format={{.State.Status}}
	I0307 21:47:55.220068    8582 cli_runner.go:164] Run: docker exec addons-963512 stat /var/lib/dpkg/alternatives/iptables
	I0307 21:47:55.288088    8582 oci.go:144] the created container "addons-963512" has a running status.
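
After the docker run above, the log shows minikube confirming readiness by re-inspecting the container with Go templates ({{.State.Running}}, {{.State.Status}}). A hedged sketch of that check; the retry loop and timeout here are illustrative, since the log does not show minikube's actual retry policy:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// containerRunning mirrors the call in the log:
	// docker container inspect addons-963512 --format={{.State.Running}}
	func containerRunning(name string) (bool, error) {
		out, err := exec.Command("docker", "container", "inspect",
			name, "--format={{.State.Running}}").Output()
		if err != nil {
			return false, err
		}
		return strings.TrimSpace(string(out)) == "true", nil
	}

	func main() {
		for i := 0; i < 20; i++ { // illustrative retry budget
			if ok, err := containerRunning("addons-963512"); err == nil && ok {
				fmt.Println("container is running")
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for container")
	}
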
	I0307 21:47:55.288170    8582 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18320-2408/.minikube/machines/addons-963512/id_rsa...
	I0307 21:47:56.086307    8582 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18320-2408/.minikube/machines/addons-963512/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0307 21:47:56.116585    8582 cli_runner.go:164] Run: docker container inspect addons-963512 --format={{.State.Status}}
	I0307 21:47:56.141605    8582 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0307 21:47:56.141624    8582 kic_runner.go:114] Args: [docker exec --privileged addons-963512 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0307 21:47:56.213157    8582 cli_runner.go:164] Run: docker container inspect addons-963512 --format={{.State.Status}}
	I0307 21:47:56.231609    8582 machine.go:94] provisionDockerMachine start ...
	I0307 21:47:56.231697    8582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-963512
	I0307 21:47:56.249799    8582 main.go:141] libmachine: Using SSH client type: native
	I0307 21:47:56.250177    8582 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1c40] 0x3e44a0 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I0307 21:47:56.250190    8582 main.go:141] libmachine: About to run SSH command:
	hostname
	I0307 21:47:56.383743    8582 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-963512
	
	I0307 21:47:56.383767    8582 ubuntu.go:169] provisioning hostname "addons-963512"
	I0307 21:47:56.383828    8582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-963512
	I0307 21:47:56.402522    8582 main.go:141] libmachine: Using SSH client type: native
	I0307 21:47:56.402776    8582 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1c40] 0x3e44a0 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I0307 21:47:56.402793    8582 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-963512 && echo "addons-963512" | sudo tee /etc/hostname
	I0307 21:47:56.544825    8582 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-963512
	
	I0307 21:47:56.544922    8582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-963512
	I0307 21:47:56.562226    8582 main.go:141] libmachine: Using SSH client type: native
	I0307 21:47:56.562473    8582 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1c40] 0x3e44a0 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I0307 21:47:56.562488    8582 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-963512' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-963512/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-963512' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0307 21:47:56.693258    8582 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0307 21:47:56.693280    8582 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18320-2408/.minikube CaCertPath:/home/jenkins/minikube-integration/18320-2408/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18320-2408/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18320-2408/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18320-2408/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18320-2408/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18320-2408/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18320-2408/.minikube}
	I0307 21:47:56.693311    8582 ubuntu.go:177] setting up certificates
	I0307 21:47:56.693321    8582 provision.go:84] configureAuth start
	I0307 21:47:56.693406    8582 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-963512
	I0307 21:47:56.713041    8582 provision.go:143] copyHostCerts
	I0307 21:47:56.713125    8582 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18320-2408/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18320-2408/.minikube/ca.pem (1078 bytes)
	I0307 21:47:56.713275    8582 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18320-2408/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18320-2408/.minikube/cert.pem (1123 bytes)
	I0307 21:47:56.713365    8582 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18320-2408/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18320-2408/.minikube/key.pem (1675 bytes)
	I0307 21:47:56.713427    8582 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18320-2408/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18320-2408/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18320-2408/.minikube/certs/ca-key.pem org=jenkins.addons-963512 san=[127.0.0.1 192.168.49.2 addons-963512 localhost minikube]
	I0307 21:47:57.383426    8582 provision.go:177] copyRemoteCerts
	I0307 21:47:57.383493    8582 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0307 21:47:57.383537    8582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-963512
	I0307 21:47:57.399023    8582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18320-2408/.minikube/machines/addons-963512/id_rsa Username:docker}
	I0307 21:47:57.492861    8582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18320-2408/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0307 21:47:57.517269    8582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18320-2408/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0307 21:47:57.542230    8582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18320-2408/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0307 21:47:57.565875    8582 provision.go:87] duration metric: took 872.541484ms to configureAuth
	I0307 21:47:57.565944    8582 ubuntu.go:193] setting minikube options for container-runtime
	I0307 21:47:57.566165    8582 config.go:182] Loaded profile config "addons-963512": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0307 21:47:57.566179    8582 machine.go:97] duration metric: took 1.334551415s to provisionDockerMachine
	I0307 21:47:57.566192    8582 client.go:171] duration metric: took 10.793022211s to LocalClient.Create
	I0307 21:47:57.566210    8582 start.go:167] duration metric: took 10.793094179s to libmachine.API.Create "addons-963512"
	I0307 21:47:57.566224    8582 start.go:293] postStartSetup for "addons-963512" (driver="docker")
	I0307 21:47:57.566234    8582 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0307 21:47:57.566288    8582 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0307 21:47:57.566337    8582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-963512
	I0307 21:47:57.581586    8582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18320-2408/.minikube/machines/addons-963512/id_rsa Username:docker}
	I0307 21:47:57.673193    8582 ssh_runner.go:195] Run: cat /etc/os-release
	I0307 21:47:57.676294    8582 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0307 21:47:57.676407    8582 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0307 21:47:57.676428    8582 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0307 21:47:57.676436    8582 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0307 21:47:57.676445    8582 filesync.go:126] Scanning /home/jenkins/minikube-integration/18320-2408/.minikube/addons for local assets ...
	I0307 21:47:57.676515    8582 filesync.go:126] Scanning /home/jenkins/minikube-integration/18320-2408/.minikube/files for local assets ...
	I0307 21:47:57.676542    8582 start.go:296] duration metric: took 110.311242ms for postStartSetup
	I0307 21:47:57.676854    8582 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-963512
	I0307 21:47:57.693212    8582 profile.go:142] Saving config to /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/addons-963512/config.json ...
	I0307 21:47:57.693487    8582 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0307 21:47:57.693537    8582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-963512
	I0307 21:47:57.711632    8582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18320-2408/.minikube/machines/addons-963512/id_rsa Username:docker}
	I0307 21:47:57.801217    8582 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0307 21:47:57.805778    8582 start.go:128] duration metric: took 11.035863034s to createHost
	I0307 21:47:57.805803    8582 start.go:83] releasing machines lock for "addons-963512", held for 11.035990772s
	I0307 21:47:57.805902    8582 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-963512
	I0307 21:47:57.822355    8582 ssh_runner.go:195] Run: cat /version.json
	I0307 21:47:57.822401    8582 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0307 21:47:57.822555    8582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-963512
	I0307 21:47:57.822407    8582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-963512
	I0307 21:47:57.848947    8582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18320-2408/.minikube/machines/addons-963512/id_rsa Username:docker}
	I0307 21:47:57.852026    8582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18320-2408/.minikube/machines/addons-963512/id_rsa Username:docker}
	I0307 21:47:58.063399    8582 ssh_runner.go:195] Run: systemctl --version
	I0307 21:47:58.067742    8582 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0307 21:47:58.071923    8582 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0307 21:47:58.096697    8582 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0307 21:47:58.096771    8582 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0307 21:47:58.126428    8582 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
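The find/sed pair above normalizes any loopback CNI config in /etc/cni/net.d (inserting a "name": "loopback" key when missing and pinning cniVersion to 1.0.0), and the second find then renames bridge/podman configs out of the way. A minimal sanity check, as a sketch (the concrete file name below is an assumption; only the two edits come from the commands above):

	# sketch: inspect a patched loopback config; the file name is hypothetical
	cat /etc/cni/net.d/200-loopback.conf
	# expected shape after the sed edits above, roughly:
	#   { "cniVersion": "1.0.0", "name": "loopback", "type": "loopback" }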
	I0307 21:47:58.126494    8582 start.go:494] detecting cgroup driver to use...
	I0307 21:47:58.126551    8582 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0307 21:47:58.126629    8582 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0307 21:47:58.139503    8582 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0307 21:47:58.150850    8582 docker.go:217] disabling cri-docker service (if available) ...
	I0307 21:47:58.150941    8582 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0307 21:47:58.164918    8582 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0307 21:47:58.179810    8582 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0307 21:47:58.259103    8582 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0307 21:47:58.352417    8582 docker.go:233] disabling docker service ...
	I0307 21:47:58.352499    8582 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0307 21:47:58.372393    8582 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0307 21:47:58.384514    8582 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0307 21:47:58.464789    8582 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0307 21:47:58.552249    8582 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0307 21:47:58.563803    8582 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0307 21:47:58.581715    8582 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0307 21:47:58.591987    8582 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0307 21:47:58.601874    8582 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0307 21:47:58.602000    8582 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0307 21:47:58.611674    8582 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0307 21:47:58.621410    8582 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0307 21:47:58.630875    8582 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0307 21:47:58.640641    8582 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0307 21:47:58.649953    8582 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0307 21:47:58.659390    8582 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0307 21:47:58.667982    8582 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0307 21:47:58.676095    8582 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 21:47:58.761491    8582 ssh_runner.go:195] Run: sudo systemctl restart containerd
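Before the restart above, the sed sequence rewrites /etc/containerd/config.toml: the pause image is pinned to registry.k8s.io/pause:3.9, restrict_oom_score_adj is turned off, SystemdCgroup is set to false to match the detected "cgroupfs" host driver, the legacy io.containerd.runtime.v1.linux and runc.v1 runtimes are mapped to io.containerd.runc.v2, and conf_dir is pointed at /etc/cni/net.d. One quick way to confirm the rewritten keys, as a sketch (exact placement inside config.toml varies):

	# sketch: verify the containerd keys rewritten by the sed commands above
	grep -nE 'sandbox_image|SystemdCgroup|conf_dir' /etc/containerd/config.toml
	# expected values:
	#   sandbox_image = "registry.k8s.io/pause:3.9"
	#   SystemdCgroup = false
	#   conf_dir = "/etc/cni/net.d"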
	I0307 21:47:58.892296    8582 start.go:541] Will wait 60s for socket path /run/containerd/containerd.sock
	I0307 21:47:58.892375    8582 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0307 21:47:58.895807    8582 start.go:562] Will wait 60s for crictl version
	I0307 21:47:58.895876    8582 ssh_runner.go:195] Run: which crictl
	I0307 21:47:58.899177    8582 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0307 21:47:58.945787    8582 start.go:578] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.28
	RuntimeApiVersion:  v1
	I0307 21:47:58.945893    8582 ssh_runner.go:195] Run: containerd --version
	I0307 21:47:58.966474    8582 ssh_runner.go:195] Run: containerd --version
	I0307 21:47:58.991151    8582 out.go:177] * Preparing Kubernetes v1.28.4 on containerd 1.6.28 ...
	I0307 21:47:58.993564    8582 cli_runner.go:164] Run: docker network inspect addons-963512 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0307 21:47:59.009426    8582 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0307 21:47:59.013109    8582 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0307 21:47:59.023712    8582 kubeadm.go:877] updating cluster {Name:addons-963512 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-963512 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0307 21:47:59.023842    8582 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0307 21:47:59.023911    8582 ssh_runner.go:195] Run: sudo crictl images --output json
	I0307 21:47:59.066001    8582 containerd.go:612] all images are preloaded for containerd runtime.
	I0307 21:47:59.066025    8582 containerd.go:519] Images already preloaded, skipping extraction
	I0307 21:47:59.066118    8582 ssh_runner.go:195] Run: sudo crictl images --output json
	I0307 21:47:59.102076    8582 containerd.go:612] all images are preloaded for containerd runtime.
	I0307 21:47:59.102098    8582 cache_images.go:84] Images are preloaded, skipping loading
	I0307 21:47:59.102107    8582 kubeadm.go:928] updating node { 192.168.49.2 8443 v1.28.4 containerd true true} ...
	I0307 21:47:59.102214    8582 kubeadm.go:940] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-963512 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-963512 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0307 21:47:59.102318    8582 ssh_runner.go:195] Run: sudo crictl info
	I0307 21:47:59.138900    8582 cni.go:84] Creating CNI manager for ""
	I0307 21:47:59.138921    8582 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0307 21:47:59.138931    8582 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0307 21:47:59.138953    8582 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-963512 NodeName:addons-963512 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0307 21:47:59.139092    8582 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-963512"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0307 21:47:59.139179    8582 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0307 21:47:59.148326    8582 binaries.go:44] Found k8s binaries, skipping transfer
	I0307 21:47:59.148439    8582 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0307 21:47:59.157065    8582 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0307 21:47:59.174717    8582 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0307 21:47:59.194886    8582 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2167 bytes)
	I0307 21:47:59.213118    8582 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0307 21:47:59.216335    8582 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
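Both /etc/hosts edits (here and at 21:47:59.013 above) use the same pattern: grep -v strips any stale entry, echo appends the fresh one, and the result is copied back over /etc/hosts via a temp file. Afterwards the guest resolves both stable names:

	# sketch: the two entries the commands above leave in /etc/hosts
	grep minikube.internal /etc/hosts
	# 192.168.49.1	host.minikube.internal
	# 192.168.49.2	control-plane.minikube.internal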
	I0307 21:47:59.226878    8582 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 21:47:59.303895    8582 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0307 21:47:59.319283    8582 certs.go:68] Setting up /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/addons-963512 for IP: 192.168.49.2
	I0307 21:47:59.319346    8582 certs.go:194] generating shared ca certs ...
	I0307 21:47:59.319376    8582 certs.go:226] acquiring lock for ca certs: {Name:mk7f303c61c8508a802bee4114a394243ccd109f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 21:47:59.319550    8582 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/18320-2408/.minikube/ca.key
	I0307 21:47:59.607025    8582 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18320-2408/.minikube/ca.crt ...
	I0307 21:47:59.607059    8582 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18320-2408/.minikube/ca.crt: {Name:mkfbbe6943bc19d717b500158cbceb169ba4756a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 21:47:59.607252    8582 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18320-2408/.minikube/ca.key ...
	I0307 21:47:59.607265    8582 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18320-2408/.minikube/ca.key: {Name:mkbb3672b232f77a77f948f9cc6992fc9b82b64b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 21:47:59.607352    8582 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18320-2408/.minikube/proxy-client-ca.key
	I0307 21:47:59.947161    8582 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18320-2408/.minikube/proxy-client-ca.crt ...
	I0307 21:47:59.947213    8582 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18320-2408/.minikube/proxy-client-ca.crt: {Name:mk9aa750b39dfb459a036c45219993e3675189b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 21:47:59.947448    8582 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18320-2408/.minikube/proxy-client-ca.key ...
	I0307 21:47:59.947465    8582 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18320-2408/.minikube/proxy-client-ca.key: {Name:mk7da6cab51d06694701335e0cd5746ee1c580c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 21:47:59.947581    8582 certs.go:256] generating profile certs ...
	I0307 21:47:59.947654    8582 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/addons-963512/client.key
	I0307 21:47:59.947671    8582 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/addons-963512/client.crt with IP's: []
	I0307 21:48:00.428445    8582 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/addons-963512/client.crt ...
	I0307 21:48:00.428532    8582 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/addons-963512/client.crt: {Name:mk7395f79e624ececb255077e9cbeba412d3c048 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 21:48:00.428801    8582 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/addons-963512/client.key ...
	I0307 21:48:00.428842    8582 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/addons-963512/client.key: {Name:mk1d5c95cb3c70b545bc143fd89b2c9b6bd00cc5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 21:48:00.428987    8582 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/addons-963512/apiserver.key.00f2f529
	I0307 21:48:00.429036    8582 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/addons-963512/apiserver.crt.00f2f529 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0307 21:48:01.189312    8582 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/addons-963512/apiserver.crt.00f2f529 ...
	I0307 21:48:01.189350    8582 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/addons-963512/apiserver.crt.00f2f529: {Name:mk0b95b44ed83c74e8cc54fd259198587f90b661 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 21:48:01.189561    8582 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/addons-963512/apiserver.key.00f2f529 ...
	I0307 21:48:01.189577    8582 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/addons-963512/apiserver.key.00f2f529: {Name:mk68d2c56784a39a9401fa891f59e1e221f2a63a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 21:48:01.189675    8582 certs.go:381] copying /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/addons-963512/apiserver.crt.00f2f529 -> /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/addons-963512/apiserver.crt
	I0307 21:48:01.189771    8582 certs.go:385] copying /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/addons-963512/apiserver.key.00f2f529 -> /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/addons-963512/apiserver.key
	I0307 21:48:01.189830    8582 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/addons-963512/proxy-client.key
	I0307 21:48:01.189854    8582 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/addons-963512/proxy-client.crt with IP's: []
	I0307 21:48:01.724770    8582 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/addons-963512/proxy-client.crt ...
	I0307 21:48:01.724801    8582 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/addons-963512/proxy-client.crt: {Name:mke9397b127f14849a7627474f57d44162a7e32d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 21:48:01.724976    8582 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/addons-963512/proxy-client.key ...
	I0307 21:48:01.724990    8582 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/addons-963512/proxy-client.key: {Name:mk78eda69ae2c05925b3f3fed3a9ccf0d9de591c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 21:48:01.725182    8582 certs.go:484] found cert: /home/jenkins/minikube-integration/18320-2408/.minikube/certs/ca-key.pem (1679 bytes)
	I0307 21:48:01.725223    8582 certs.go:484] found cert: /home/jenkins/minikube-integration/18320-2408/.minikube/certs/ca.pem (1078 bytes)
	I0307 21:48:01.725251    8582 certs.go:484] found cert: /home/jenkins/minikube-integration/18320-2408/.minikube/certs/cert.pem (1123 bytes)
	I0307 21:48:01.725286    8582 certs.go:484] found cert: /home/jenkins/minikube-integration/18320-2408/.minikube/certs/key.pem (1675 bytes)
	I0307 21:48:01.725857    8582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18320-2408/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0307 21:48:01.752052    8582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18320-2408/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0307 21:48:01.775244    8582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18320-2408/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0307 21:48:01.800248    8582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18320-2408/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0307 21:48:01.823983    8582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/addons-963512/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0307 21:48:01.848750    8582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/addons-963512/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0307 21:48:01.872115    8582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/addons-963512/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0307 21:48:01.895693    8582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/addons-963512/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0307 21:48:01.919389    8582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18320-2408/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0307 21:48:01.946379    8582 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0307 21:48:01.965504    8582 ssh_runner.go:195] Run: openssl version
	I0307 21:48:01.971285    8582 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0307 21:48:01.981327    8582 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0307 21:48:01.985233    8582 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar  7 21:47 /usr/share/ca-certificates/minikubeCA.pem
	I0307 21:48:01.985339    8582 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0307 21:48:01.992664    8582 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
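The b5213941.0 link name is OpenSSL's subject-hash form, which is how the system trust store looks up the minikube CA; it is derived from the openssl x509 -hash call two lines up. The equivalent shell, as a sketch:

	# sketch: derive the subject-hash link name for the minikube CA
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
	# here $h evaluates to b5213941, matching the link created above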
	I0307 21:48:02.003677    8582 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0307 21:48:02.009691    8582 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0307 21:48:02.009781    8582 kubeadm.go:391] StartCluster: {Name:addons-963512 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-963512 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 21:48:02.009864    8582 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0307 21:48:02.009927    8582 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0307 21:48:02.053741    8582 cri.go:89] found id: ""
	I0307 21:48:02.053812    8582 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0307 21:48:02.062557    8582 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0307 21:48:02.071478    8582 kubeadm.go:213] ignoring SystemVerification for kubeadm because of docker driver
	I0307 21:48:02.071563    8582 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0307 21:48:02.080328    8582 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0307 21:48:02.080347    8582 kubeadm.go:156] found existing configuration files:
	
	I0307 21:48:02.080419    8582 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0307 21:48:02.090199    8582 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0307 21:48:02.090271    8582 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0307 21:48:02.098751    8582 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0307 21:48:02.107397    8582 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0307 21:48:02.107507    8582 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0307 21:48:02.115698    8582 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0307 21:48:02.124355    8582 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0307 21:48:02.124435    8582 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0307 21:48:02.133228    8582 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0307 21:48:02.141847    8582 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0307 21:48:02.141907    8582 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0307 21:48:02.150293    8582 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0307 21:48:02.197355    8582 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0307 21:48:02.197437    8582 kubeadm.go:309] [preflight] Running pre-flight checks
	I0307 21:48:02.238490    8582 kubeadm.go:309] [preflight] The system verification failed. Printing the output from the verification:
	I0307 21:48:02.238580    8582 kubeadm.go:309] KERNEL_VERSION: 5.15.0-1055-aws
	I0307 21:48:02.238634    8582 kubeadm.go:309] OS: Linux
	I0307 21:48:02.238693    8582 kubeadm.go:309] CGROUPS_CPU: enabled
	I0307 21:48:02.238758    8582 kubeadm.go:309] CGROUPS_CPUACCT: enabled
	I0307 21:48:02.238819    8582 kubeadm.go:309] CGROUPS_CPUSET: enabled
	I0307 21:48:02.238875    8582 kubeadm.go:309] CGROUPS_DEVICES: enabled
	I0307 21:48:02.238936    8582 kubeadm.go:309] CGROUPS_FREEZER: enabled
	I0307 21:48:02.238997    8582 kubeadm.go:309] CGROUPS_MEMORY: enabled
	I0307 21:48:02.239054    8582 kubeadm.go:309] CGROUPS_PIDS: enabled
	I0307 21:48:02.239114    8582 kubeadm.go:309] CGROUPS_HUGETLB: enabled
	I0307 21:48:02.239172    8582 kubeadm.go:309] CGROUPS_BLKIO: enabled
	I0307 21:48:02.317636    8582 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0307 21:48:02.317839    8582 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0307 21:48:02.317953    8582 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0307 21:48:02.532668    8582 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0307 21:48:02.535366    8582 out.go:204]   - Generating certificates and keys ...
	I0307 21:48:02.535537    8582 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0307 21:48:02.535621    8582 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0307 21:48:03.119922    8582 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0307 21:48:03.901933    8582 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0307 21:48:04.516506    8582 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0307 21:48:04.623527    8582 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0307 21:48:05.178652    8582 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0307 21:48:05.178871    8582 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-963512 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0307 21:48:05.577145    8582 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0307 21:48:05.577502    8582 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-963512 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0307 21:48:05.882745    8582 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0307 21:48:06.079949    8582 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0307 21:48:06.373079    8582 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0307 21:48:06.373503    8582 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0307 21:48:06.497702    8582 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0307 21:48:06.889410    8582 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0307 21:48:07.498623    8582 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0307 21:48:07.968484    8582 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0307 21:48:07.969574    8582 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0307 21:48:07.972670    8582 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0307 21:48:07.975281    8582 out.go:204]   - Booting up control plane ...
	I0307 21:48:07.975406    8582 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0307 21:48:07.976830    8582 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0307 21:48:07.978330    8582 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0307 21:48:07.993081    8582 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0307 21:48:07.993690    8582 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0307 21:48:07.993765    8582 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0307 21:48:08.090991    8582 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0307 21:48:15.093676    8582 kubeadm.go:309] [apiclient] All control plane components are healthy after 7.002736 seconds
	I0307 21:48:15.093796    8582 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0307 21:48:15.109385    8582 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0307 21:48:15.637033    8582 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0307 21:48:15.637227    8582 kubeadm.go:309] [mark-control-plane] Marking the node addons-963512 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0307 21:48:16.148820    8582 kubeadm.go:309] [bootstrap-token] Using token: eitw6c.jj5bymtx8a9epad7
	I0307 21:48:16.151351    8582 out.go:204]   - Configuring RBAC rules ...
	I0307 21:48:16.151485    8582 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0307 21:48:16.160130    8582 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0307 21:48:16.170199    8582 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0307 21:48:16.173972    8582 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller to automatically approve CSRs from a Node Bootstrap Token
	I0307 21:48:16.178568    8582 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0307 21:48:16.182025    8582 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0307 21:48:16.196298    8582 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0307 21:48:16.421411    8582 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0307 21:48:16.565628    8582 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0307 21:48:16.566647    8582 kubeadm.go:309] 
	I0307 21:48:16.566716    8582 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0307 21:48:16.566722    8582 kubeadm.go:309] 
	I0307 21:48:16.566796    8582 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0307 21:48:16.566801    8582 kubeadm.go:309] 
	I0307 21:48:16.566825    8582 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0307 21:48:16.566881    8582 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0307 21:48:16.566930    8582 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0307 21:48:16.566934    8582 kubeadm.go:309] 
	I0307 21:48:16.566985    8582 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0307 21:48:16.566993    8582 kubeadm.go:309] 
	I0307 21:48:16.567038    8582 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0307 21:48:16.567042    8582 kubeadm.go:309] 
	I0307 21:48:16.567092    8582 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0307 21:48:16.567163    8582 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0307 21:48:16.567232    8582 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0307 21:48:16.567236    8582 kubeadm.go:309] 
	I0307 21:48:16.567316    8582 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0307 21:48:16.567389    8582 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0307 21:48:16.567394    8582 kubeadm.go:309] 
	I0307 21:48:16.567473    8582 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token eitw6c.jj5bymtx8a9epad7 \
	I0307 21:48:16.567572    8582 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:815daef26ee193d5c7b84bb14049831ce64ba1c53ef7a1083d48ead9a06b7cce \
	I0307 21:48:16.567592    8582 kubeadm.go:309] 	--control-plane 
	I0307 21:48:16.567596    8582 kubeadm.go:309] 
	I0307 21:48:16.567677    8582 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0307 21:48:16.567681    8582 kubeadm.go:309] 
	I0307 21:48:16.567759    8582 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token eitw6c.jj5bymtx8a9epad7 \
	I0307 21:48:16.568076    8582 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:815daef26ee193d5c7b84bb14049831ce64ba1c53ef7a1083d48ead9a06b7cce 
	I0307 21:48:16.571520    8582 kubeadm.go:309] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1055-aws\n", err: exit status 1
	I0307 21:48:16.571632    8582 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
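The --discovery-token-ca-cert-hash in the join commands above is a SHA-256 digest of the cluster CA's DER-encoded public key. It can be recomputed with the standard kubeadm recipe, sketched here against minikube's certificate directory (kubeadm's default path would be /etc/kubernetes/pki/ca.crt):

	# sketch: recompute the discovery token CA cert hash from the cluster CA
	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl pkey -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'
	# expected to match the sha256 value shown in the join command above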
	I0307 21:48:16.571649    8582 cni.go:84] Creating CNI manager for ""
	I0307 21:48:16.571656    8582 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0307 21:48:16.574785    8582 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0307 21:48:16.576842    8582 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0307 21:48:16.581232    8582 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0307 21:48:16.581253    8582 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0307 21:48:16.605467    8582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0307 21:48:17.548047    8582 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0307 21:48:17.548179    8582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 21:48:17.548297    8582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-963512 minikube.k8s.io/updated_at=2024_03_07T21_48_17_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=3e3656b8cff33aafa60dd2a07a4b34bce666a6a6 minikube.k8s.io/name=addons-963512 minikube.k8s.io/primary=true
	I0307 21:48:17.759664    8582 ops.go:34] apiserver oom_adj: -16
	I0307 21:48:17.759787    8582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 21:48:18.259967    8582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 21:48:18.759924    8582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 21:48:19.260417    8582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 21:48:19.760673    8582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 21:48:20.260751    8582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 21:48:20.760445    8582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 21:48:21.260832    8582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 21:48:21.760608    8582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 21:48:22.260777    8582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 21:48:22.760001    8582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 21:48:23.260400    8582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 21:48:23.759941    8582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 21:48:24.260847    8582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 21:48:24.760767    8582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 21:48:25.259910    8582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 21:48:25.760598    8582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 21:48:26.260736    8582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 21:48:26.760744    8582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 21:48:27.259909    8582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 21:48:27.760625    8582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 21:48:28.260497    8582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 21:48:28.759898    8582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 21:48:29.259983    8582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 21:48:29.760642    8582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 21:48:30.260724    8582 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0307 21:48:30.389998    8582 kubeadm.go:1106] duration metric: took 12.841869041s to wait for elevateKubeSystemPrivileges
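The burst of identical kubectl get sa default runs above is a fixed-interval poll (one attempt roughly every 500 ms from 21:48:17.759 to 21:48:30.260) for the default ServiceAccount, which only appears once the controller-manager's service-account controller is up; the 12.84 s duration metric on the preceding line is the total wait. Sketched as a shell loop (minikube does this in Go):

	# sketch: poll until the default ServiceAccount exists
	until sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default \
	    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done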
	W0307 21:48:30.390035    8582 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0307 21:48:30.390043    8582 kubeadm.go:393] duration metric: took 28.380266479s to StartCluster
	I0307 21:48:30.390058    8582 settings.go:142] acquiring lock: {Name:mk6b824c86d3c8cffe443e44d2dcdf6ba75674f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 21:48:30.390172    8582 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18320-2408/kubeconfig
	I0307 21:48:30.390611    8582 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18320-2408/kubeconfig: {Name:mkc7f9d8cfd4e14e150b8fc8a3019ac099191c36 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 21:48:30.390816    8582 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0307 21:48:30.393517    8582 out.go:177] * Verifying Kubernetes components...
	I0307 21:48:30.390953    8582 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0307 21:48:30.391133    8582 config.go:182] Loaded profile config "addons-963512": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0307 21:48:30.391143    8582 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0307 21:48:30.395637    8582 addons.go:69] Setting ingress=true in profile "addons-963512"
	I0307 21:48:30.395649    8582 addons.go:69] Setting ingress-dns=true in profile "addons-963512"
	I0307 21:48:30.395665    8582 addons.go:234] Setting addon ingress-dns=true in "addons-963512"
	I0307 21:48:30.395668    8582 addons.go:234] Setting addon ingress=true in "addons-963512"
	I0307 21:48:30.395701    8582 host.go:66] Checking if "addons-963512" exists ...
	I0307 21:48:30.395713    8582 host.go:66] Checking if "addons-963512" exists ...
	I0307 21:48:30.396172    8582 cli_runner.go:164] Run: docker container inspect addons-963512 --format={{.State.Status}}
	I0307 21:48:30.396199    8582 cli_runner.go:164] Run: docker container inspect addons-963512 --format={{.State.Status}}
	I0307 21:48:30.396810    8582 addons.go:69] Setting inspektor-gadget=true in profile "addons-963512"
	I0307 21:48:30.396836    8582 addons.go:234] Setting addon inspektor-gadget=true in "addons-963512"
	I0307 21:48:30.396873    8582 host.go:66] Checking if "addons-963512" exists ...
	I0307 21:48:30.397268    8582 cli_runner.go:164] Run: docker container inspect addons-963512 --format={{.State.Status}}
	I0307 21:48:30.397444    8582 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 21:48:30.397678    8582 addons.go:69] Setting cloud-spanner=true in profile "addons-963512"
	I0307 21:48:30.397700    8582 addons.go:234] Setting addon cloud-spanner=true in "addons-963512"
	I0307 21:48:30.397760    8582 host.go:66] Checking if "addons-963512" exists ...
	I0307 21:48:30.398157    8582 cli_runner.go:164] Run: docker container inspect addons-963512 --format={{.State.Status}}
	I0307 21:48:30.403321    8582 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-963512"
	I0307 21:48:30.403414    8582 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-963512"
	I0307 21:48:30.403467    8582 host.go:66] Checking if "addons-963512" exists ...
	I0307 21:48:30.403954    8582 cli_runner.go:164] Run: docker container inspect addons-963512 --format={{.State.Status}}
	I0307 21:48:30.409620    8582 addons.go:69] Setting metrics-server=true in profile "addons-963512"
	I0307 21:48:30.409668    8582 addons.go:234] Setting addon metrics-server=true in "addons-963512"
	I0307 21:48:30.409701    8582 host.go:66] Checking if "addons-963512" exists ...
	I0307 21:48:30.410287    8582 cli_runner.go:164] Run: docker container inspect addons-963512 --format={{.State.Status}}
	I0307 21:48:30.415450    8582 addons.go:69] Setting default-storageclass=true in profile "addons-963512"
	I0307 21:48:30.415505    8582 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-963512"
	I0307 21:48:30.415805    8582 cli_runner.go:164] Run: docker container inspect addons-963512 --format={{.State.Status}}
	I0307 21:48:30.415939    8582 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-963512"
	I0307 21:48:30.415965    8582 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-963512"
	I0307 21:48:30.416005    8582 host.go:66] Checking if "addons-963512" exists ...
	I0307 21:48:30.416448    8582 cli_runner.go:164] Run: docker container inspect addons-963512 --format={{.State.Status}}
	I0307 21:48:30.435085    8582 addons.go:69] Setting gcp-auth=true in profile "addons-963512"
	I0307 21:48:30.436546    8582 addons.go:69] Setting registry=true in profile "addons-963512"
	I0307 21:48:30.436580    8582 addons.go:234] Setting addon registry=true in "addons-963512"
	I0307 21:48:30.436617    8582 host.go:66] Checking if "addons-963512" exists ...
	I0307 21:48:30.437058    8582 cli_runner.go:164] Run: docker container inspect addons-963512 --format={{.State.Status}}
	I0307 21:48:30.439110    8582 mustload.go:65] Loading cluster: addons-963512
	I0307 21:48:30.439372    8582 config.go:182] Loaded profile config "addons-963512": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0307 21:48:30.439706    8582 cli_runner.go:164] Run: docker container inspect addons-963512 --format={{.State.Status}}
	I0307 21:48:30.395642    8582 addons.go:69] Setting yakd=true in profile "addons-963512"
	I0307 21:48:30.458781    8582 addons.go:234] Setting addon yakd=true in "addons-963512"
	I0307 21:48:30.458826    8582 host.go:66] Checking if "addons-963512" exists ...
	I0307 21:48:30.459274    8582 cli_runner.go:164] Run: docker container inspect addons-963512 --format={{.State.Status}}
	I0307 21:48:30.478090    8582 addons.go:69] Setting storage-provisioner=true in profile "addons-963512"
	I0307 21:48:30.478174    8582 addons.go:234] Setting addon storage-provisioner=true in "addons-963512"
	I0307 21:48:30.478233    8582 host.go:66] Checking if "addons-963512" exists ...
	I0307 21:48:30.488862    8582 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.0
	I0307 21:48:30.488443    8582 cli_runner.go:164] Run: docker container inspect addons-963512 --format={{.State.Status}}
	I0307 21:48:30.480360    8582 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-963512"
	I0307 21:48:30.480374    8582 addons.go:69] Setting volumesnapshots=true in profile "addons-963512"
	I0307 21:48:30.493105    8582 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0307 21:48:30.521747    8582 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.25.1
	I0307 21:48:30.521821    8582 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-963512"
	I0307 21:48:30.521914    8582 addons.go:234] Setting addon volumesnapshots=true in "addons-963512"
	I0307 21:48:30.524762    8582 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.14
	I0307 21:48:30.524843    8582 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0307 21:48:30.524929    8582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0307 21:48:30.530545    8582 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0307 21:48:30.545031    8582 cli_runner.go:164] Run: docker container inspect addons-963512 --format={{.State.Status}}
	I0307 21:48:30.545095    8582 host.go:66] Checking if "addons-963512" exists ...
	I0307 21:48:30.545224    8582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-963512
	I0307 21:48:30.550189    8582 addons.go:234] Setting addon default-storageclass=true in "addons-963512"
	I0307 21:48:30.550314    8582 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.5
	I0307 21:48:30.550355    8582 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0307 21:48:30.556813    8582 cli_runner.go:164] Run: docker container inspect addons-963512 --format={{.State.Status}}
	I0307 21:48:30.557369    8582 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0307 21:48:30.557420    8582 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0307 21:48:30.565154    8582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0307 21:48:30.565222    8582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-963512
	I0307 21:48:30.567461    8582 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0307 21:48:30.580584    8582 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0307 21:48:30.584420    8582 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0307 21:48:30.566237    8582 host.go:66] Checking if "addons-963512" exists ...
	I0307 21:48:30.567437    8582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0307 21:48:30.580517    8582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0307 21:48:30.590551    8582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-963512
	I0307 21:48:30.594826    8582 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0307 21:48:30.594846    8582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0307 21:48:30.594911    8582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-963512
	I0307 21:48:30.648533    8582 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0307 21:48:30.650864    8582 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.0
	I0307 21:48:30.653252    8582 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0307 21:48:30.653271    8582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0307 21:48:30.653337    8582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-963512
	I0307 21:48:30.580498    8582 out.go:177]   - Using image docker.io/registry:2.8.3
	I0307 21:48:30.662369    8582 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0307 21:48:30.665060    8582 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0307 21:48:30.665082    8582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0307 21:48:30.665137    8582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-963512
	I0307 21:48:30.686763    8582 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0307 21:48:30.662610    8582 host.go:66] Checking if "addons-963512" exists ...
	I0307 21:48:30.663204    8582 cli_runner.go:164] Run: docker container inspect addons-963512 --format={{.State.Status}}
	I0307 21:48:30.663234    8582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-963512
	I0307 21:48:30.701235    8582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18320-2408/.minikube/machines/addons-963512/id_rsa Username:docker}
	I0307 21:48:30.701895    8582 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0307 21:48:30.721860    8582 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0307 21:48:30.723919    8582 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0307 21:48:30.702223    8582 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0307 21:48:30.724984    8582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18320-2408/.minikube/machines/addons-963512/id_rsa Username:docker}
	I0307 21:48:30.728196    8582 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0307 21:48:30.736495    8582 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0307 21:48:30.738697    8582 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0307 21:48:30.738718    8582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0307 21:48:30.738781    8582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-963512
	I0307 21:48:30.749385    8582 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-963512"
	I0307 21:48:30.749429    8582 host.go:66] Checking if "addons-963512" exists ...
	I0307 21:48:30.749834    8582 cli_runner.go:164] Run: docker container inspect addons-963512 --format={{.State.Status}}
	I0307 21:48:30.762586    8582 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0307 21:48:30.764580    8582 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0307 21:48:30.764603    8582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0307 21:48:30.764668    8582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-963512
	I0307 21:48:30.796245    8582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0307 21:48:30.796425    8582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-963512
	I0307 21:48:30.804993    8582 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0307 21:48:30.810783    8582 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0307 21:48:30.810815    8582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0307 21:48:30.810883    8582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-963512
	I0307 21:48:30.805208    8582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18320-2408/.minikube/machines/addons-963512/id_rsa Username:docker}
	I0307 21:48:30.848253    8582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18320-2408/.minikube/machines/addons-963512/id_rsa Username:docker}
	I0307 21:48:30.863382    8582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18320-2408/.minikube/machines/addons-963512/id_rsa Username:docker}
	I0307 21:48:30.893093    8582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18320-2408/.minikube/machines/addons-963512/id_rsa Username:docker}
	I0307 21:48:30.911546    8582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18320-2408/.minikube/machines/addons-963512/id_rsa Username:docker}
	I0307 21:48:30.920753    8582 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0307 21:48:30.920776    8582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0307 21:48:30.920832    8582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-963512
	I0307 21:48:30.956161    8582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18320-2408/.minikube/machines/addons-963512/id_rsa Username:docker}
	I0307 21:48:30.967160    8582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18320-2408/.minikube/machines/addons-963512/id_rsa Username:docker}
	I0307 21:48:30.970777    8582 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0307 21:48:30.972246    8582 out.go:177]   - Using image docker.io/busybox:stable
	I0307 21:48:30.976682    8582 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0307 21:48:30.976698    8582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0307 21:48:30.976761    8582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-963512
	I0307 21:48:30.999969    8582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18320-2408/.minikube/machines/addons-963512/id_rsa Username:docker}
	I0307 21:48:31.000870    8582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18320-2408/.minikube/machines/addons-963512/id_rsa Username:docker}
	I0307 21:48:31.047165    8582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18320-2408/.minikube/machines/addons-963512/id_rsa Username:docker}
	I0307 21:48:31.051786    8582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18320-2408/.minikube/machines/addons-963512/id_rsa Username:docker}
	I0307 21:48:31.135551    8582 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0307 21:48:31.135672    8582 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0307 21:48:31.191037    8582 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0307 21:48:31.191064    8582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0307 21:48:31.196430    8582 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0307 21:48:31.248452    8582 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0307 21:48:31.288364    8582 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0307 21:48:31.302327    8582 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0307 21:48:31.302360    8582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0307 21:48:31.306458    8582 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0307 21:48:31.329220    8582 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0307 21:48:31.372074    8582 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0307 21:48:31.372099    8582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0307 21:48:31.392882    8582 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0307 21:48:31.392924    8582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0307 21:48:31.418772    8582 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0307 21:48:31.418807    8582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0307 21:48:31.454099    8582 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0307 21:48:31.454124    8582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0307 21:48:31.457205    8582 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0307 21:48:31.462937    8582 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0307 21:48:31.481105    8582 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0307 21:48:31.481128    8582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0307 21:48:31.490215    8582 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0307 21:48:31.490240    8582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0307 21:48:31.601386    8582 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0307 21:48:31.601413    8582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0307 21:48:31.615768    8582 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0307 21:48:31.615792    8582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0307 21:48:31.660458    8582 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0307 21:48:31.660482    8582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0307 21:48:31.663742    8582 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0307 21:48:31.669477    8582 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0307 21:48:31.669547    8582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0307 21:48:31.721459    8582 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0307 21:48:31.721484    8582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0307 21:48:31.747382    8582 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0307 21:48:31.747415    8582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0307 21:48:31.755688    8582 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0307 21:48:31.755723    8582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0307 21:48:31.821194    8582 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0307 21:48:31.821227    8582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0307 21:48:31.850378    8582 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0307 21:48:31.850403    8582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0307 21:48:31.898520    8582 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0307 21:48:31.898547    8582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0307 21:48:31.900326    8582 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0307 21:48:31.900351    8582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0307 21:48:31.989085    8582 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0307 21:48:31.991773    8582 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0307 21:48:31.991812    8582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0307 21:48:32.024433    8582 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0307 21:48:32.135477    8582 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0307 21:48:32.135504    8582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0307 21:48:32.143697    8582 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0307 21:48:32.143722    8582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0307 21:48:32.200395    8582 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0307 21:48:32.200419    8582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0307 21:48:32.332848    8582 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0307 21:48:32.332877    8582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0307 21:48:32.481305    8582 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0307 21:48:32.488964    8582 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0307 21:48:32.488997    8582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0307 21:48:32.734287    8582 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0307 21:48:32.734317    8582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0307 21:48:32.882788    8582 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0307 21:48:32.935449    8582 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0307 21:48:32.935481    8582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0307 21:48:33.119058    8582 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0307 21:48:33.119088    8582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0307 21:48:33.344726    8582 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0307 21:48:33.344758    8582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0307 21:48:33.699597    8582 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0307 21:48:33.699628    8582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0307 21:48:34.221524    8582 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0307 21:48:34.221568    8582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0307 21:48:34.461681    8582 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0307 21:48:34.546989    8582 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.411289824s)
	I0307 21:48:34.547129    8582 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.411549162s)
	I0307 21:48:34.547146    8582 start.go:948] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
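	The pipeline that just completed rewrites the coredns ConfigMap in place: it reads the Corefile, uses sed to splice a hosts block (mapping the gateway IP 192.168.49.1 to host.minikube.internal, with fallthrough) in front of the "forward . /etc/resolv.conf" directive and a log directive ahead of errors, then feeds the result back through kubectl replace. A minimal Go sketch of the same text transformation follows (hypothetical standalone code for illustration only; minikube itself shells out to sed, as logged above):

	package main

	import (
		"fmt"
		"strings"
	)

	// injectHostRecord splices a CoreDNS `hosts` block in front of the
	// `forward . /etc/resolv.conf` directive, mirroring what the logged
	// sed pipeline does to the coredns ConfigMap. Purely illustrative.
	func injectHostRecord(corefile, gatewayIP string) string {
		hostsBlock := fmt.Sprintf(
			"        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n",
			gatewayIP)
		var out strings.Builder
		for _, line := range strings.SplitAfter(corefile, "\n") {
			if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
				out.WriteString(hostsBlock) // insert before the forward directive
			}
			out.WriteString(line)
		}
		return out.String()
	}

	func main() {
		corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf\n}\n"
		fmt.Print(injectHostRecord(corefile, "192.168.49.1"))
	}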
	I0307 21:48:34.548688    8582 node_ready.go:35] waiting up to 6m0s for node "addons-963512" to be "Ready" ...
	I0307 21:48:34.552488    8582 node_ready.go:49] node "addons-963512" has status "Ready":"True"
	I0307 21:48:34.552509    8582 node_ready.go:38] duration metric: took 3.797489ms for node "addons-963512" to be "Ready" ...
	I0307 21:48:34.552518    8582 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0307 21:48:34.563281    8582 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-29fkp" in "kube-system" namespace to be "Ready" ...
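	The node_ready.go and pod_ready.go lines above follow one shape: poll the API server for a Ready condition until it flips to True or the 6m0s deadline passes. A minimal Go sketch of that wait loop (the check callback is a hypothetical stand-in for the real client-go lookup of the node or pod condition):

	package main

	import (
		"fmt"
		"time"
	)

	// pollUntilReady mirrors the wait loop behind node_ready.go and
	// pod_ready.go: call check until it reports true, returns an error,
	// or the deadline passes.
	func pollUntilReady(timeout, interval time.Duration, check func() (bool, error)) error {
		deadline := time.Now().Add(timeout)
		for {
			ready, err := check()
			if err != nil {
				return err
			}
			if ready {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out after %s waiting for Ready", timeout)
			}
			time.Sleep(interval)
		}
	}

	func main() {
		start := time.Now()
		// Toy condition standing in for "node/pod has condition Ready=True".
		err := pollUntilReady(6*time.Minute, 100*time.Millisecond, func() (bool, error) {
			return time.Since(start) > 300*time.Millisecond, nil
		})
		fmt.Println("wait result:", err)
	}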
	I0307 21:48:34.931049    8582 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.734582053s)
	I0307 21:48:35.052183    8582 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-963512" context rescaled to 1 replicas
	I0307 21:48:35.440029    8582 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.191541003s)
	I0307 21:48:35.440140    8582 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.151752665s)
	I0307 21:48:36.570340    8582 pod_ready.go:102] pod "coredns-5dd5756b68-29fkp" in "kube-system" namespace has status "Ready":"False"
	I0307 21:48:37.514000    8582 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0307 21:48:37.514084    8582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-963512
	I0307 21:48:37.535542    8582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18320-2408/.minikube/machines/addons-963512/id_rsa Username:docker}
	I0307 21:48:37.871120    8582 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0307 21:48:37.896293    8582 addons.go:234] Setting addon gcp-auth=true in "addons-963512"
	I0307 21:48:37.896343    8582 host.go:66] Checking if "addons-963512" exists ...
	I0307 21:48:37.896859    8582 cli_runner.go:164] Run: docker container inspect addons-963512 --format={{.State.Status}}
	I0307 21:48:37.928132    8582 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0307 21:48:37.928190    8582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-963512
	I0307 21:48:37.959514    8582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18320-2408/.minikube/machines/addons-963512/id_rsa Username:docker}
	I0307 21:48:38.573524    8582 pod_ready.go:102] pod "coredns-5dd5756b68-29fkp" in "kube-system" namespace has status "Ready":"False"
	I0307 21:48:39.386345    8582 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.057084393s)
	I0307 21:48:39.386424    8582 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.929196437s)
	I0307 21:48:39.386495    8582 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.080002504s)
	I0307 21:48:39.386522    8582 addons.go:470] Verifying addon ingress=true in "addons-963512"
	I0307 21:48:39.388892    8582 out.go:177] * Verifying ingress addon...
	I0307 21:48:39.386789    8582 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.923829453s)
	I0307 21:48:39.391600    8582 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0307 21:48:39.386893    8582 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.723120499s)
	I0307 21:48:39.391813    8582 addons.go:470] Verifying addon registry=true in "addons-963512"
	I0307 21:48:39.386999    8582 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.362532385s)
	I0307 21:48:39.387084    8582 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.905746539s)
	I0307 21:48:39.387138    8582 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.504321607s)
	I0307 21:48:39.386944    8582 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.397833342s)
	I0307 21:48:39.393744    8582 addons.go:470] Verifying addon metrics-server=true in "addons-963512"
	I0307 21:48:39.393768    8582 out.go:177] * Verifying registry addon...
	I0307 21:48:39.396857    8582 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	W0307 21:48:39.393872    8582 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0307 21:48:39.397066    8582 retry.go:31] will retry after 370.790466ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
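	The failure above is an ordering race rather than a broken manifest: csi-hostpath-snapshotclass.yaml declares a VolumeSnapshotClass in the same kubectl apply that creates the snapshot.storage.k8s.io CRDs, and the REST mapping for the new kind is not yet established when the custom object is validated, hence "no matches for kind". retry.go backs off (370.790466ms here) and, as the apply at 21:48:39.768356 below shows, re-runs the command with --force added. A minimal Go sketch of that retry-with-backoff shape (the function names and the apply that fails twice are hypothetical, not minikube's code):

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryApply re-runs apply with jittered backoff, the same shape as
	// the retry logged at retry.go:31 ("will retry after 370.790466ms").
	// apply is any callable; in minikube it shells out to kubectl.
	func retryApply(attempts int, base time.Duration, apply func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = apply(); err == nil {
				return nil
			}
			// Jittered sleep so concurrent retries do not stampede the apiserver.
			time.Sleep(base + time.Duration(rand.Int63n(int64(base))))
		}
		return fmt.Errorf("gave up after %d attempts: %w", attempts, err)
	}

	func main() {
		calls := 0
		err := retryApply(5, 200*time.Millisecond, func() error {
			calls++
			if calls < 3 { // first applies fail, e.g. CRD not yet established
				return errors.New(`no matches for kind "VolumeSnapshotClass"`)
			}
			return nil
		})
		fmt.Println("result:", err, "calls:", calls)
	}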
	I0307 21:48:39.399364    8582 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-963512 service yakd-dashboard -n yakd-dashboard
	
	I0307 21:48:39.401493    8582 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0307 21:48:39.403709    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:48:39.409106    8582 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0307 21:48:39.409135    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0307 21:48:39.427071    8582 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
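	This warning is the API server's optimistic-concurrency check firing: between minikube reading the local-path StorageClass and writing it back to clear its default annotation, another writer bumped the object's resourceVersion, so the stale update was rejected. The standard remedy is to re-read and retry the write on conflict (client-go packages this as retry.RetryOnConflict). A self-contained toy model of the conflict and the retry, with hypothetical types rather than Kubernetes code:

	package main

	import (
		"errors"
		"fmt"
	)

	// A toy object store with optimistic concurrency, modelling why the
	// default-storageclass callback failed: an update is rejected when the
	// object's resourceVersion no longer matches the version it was read at.
	type object struct {
		resourceVersion int
		isDefault       bool
	}

	var errConflict = errors.New("the object has been modified; please apply your changes to the latest version and try again")

	type store struct{ obj object }

	func (s *store) get() object { return s.obj }

	func (s *store) update(o object) error {
		if o.resourceVersion != s.obj.resourceVersion {
			return errConflict // stale read: someone else wrote in between
		}
		o.resourceVersion++
		s.obj = o
		return nil
	}

	// markNonDefault retries the read-modify-write until it lands on the
	// latest version, the remedy client-go's retry.RetryOnConflict applies.
	func markNonDefault(s *store) error {
		for i := 0; i < 5; i++ {
			o := s.get()
			o.isDefault = false
			if err := s.update(o); !errors.Is(err, errConflict) {
				return err
			}
		}
		return errConflict
	}

	func main() {
		s := &store{obj: object{resourceVersion: 7, isDefault: true}}
		// Simulate a concurrent writer bumping the version before our update.
		stale := s.get()
		s.obj.resourceVersion++
		fmt.Println("stale update:", s.update(stale)) // conflict, as in the log
		fmt.Println("with retry:  ", markNonDefault(s))
	}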
	I0307 21:48:39.768356    8582 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0307 21:48:39.896907    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:48:39.902127    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 21:48:40.483332    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:48:40.493009    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 21:48:40.572772    8582 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.644606371s)
	I0307 21:48:40.574636    8582 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.1
	I0307 21:48:40.572900    8582 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.111016973s)
	I0307 21:48:40.577296    8582 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-963512"
	I0307 21:48:40.582047    8582 out.go:177] * Verifying csi-hostpath-driver addon...
	I0307 21:48:40.584527    8582 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0307 21:48:40.594825    8582 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0307 21:48:40.594856    8582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0307 21:48:40.585376    8582 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0307 21:48:40.635152    8582 pod_ready.go:102] pod "coredns-5dd5756b68-29fkp" in "kube-system" namespace has status "Ready":"False"
	I0307 21:48:40.636580    8582 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0307 21:48:40.636601    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:48:40.682833    8582 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0307 21:48:40.682866    8582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0307 21:48:40.743342    8582 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0307 21:48:40.743366    8582 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0307 21:48:40.800851    8582 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0307 21:48:40.901328    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:48:40.909439    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 21:48:41.102010    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:48:41.396876    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:48:41.402359    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 21:48:41.601375    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:48:41.923616    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:48:41.928365    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 21:48:42.148319    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:48:42.316116    8582 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.515204539s)
	I0307 21:48:42.316266    8582 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.54787463s)
	I0307 21:48:42.319786    8582 addons.go:470] Verifying addon gcp-auth=true in "addons-963512"
	I0307 21:48:42.323523    8582 out.go:177] * Verifying gcp-auth addon...
	I0307 21:48:42.326486    8582 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0307 21:48:42.330283    8582 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0307 21:48:42.330324    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:48:42.400157    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:48:42.405315    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 21:48:42.600884    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:48:42.830866    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:48:42.895948    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:48:42.901166    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 21:48:43.069916    8582 pod_ready.go:102] pod "coredns-5dd5756b68-29fkp" in "kube-system" namespace has status "Ready":"False"
	I0307 21:48:43.101161    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:48:43.331072    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:48:43.396400    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:48:43.402156    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 21:48:43.601017    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:48:43.830885    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:48:43.896924    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:48:43.901083    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 21:48:44.101418    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:48:44.348979    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:48:44.396196    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:48:44.401923    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 21:48:44.601034    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:48:44.837997    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:48:44.896486    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:48:44.903659    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 21:48:45.073070    8582 pod_ready.go:102] pod "coredns-5dd5756b68-29fkp" in "kube-system" namespace has status "Ready":"False"
	I0307 21:48:45.104059    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:48:45.332005    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:48:45.397995    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:48:45.401702    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 21:48:45.601485    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:48:45.830389    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:48:45.896664    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:48:45.903197    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 21:48:46.102404    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:48:46.332649    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:48:46.396231    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:48:46.401849    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 21:48:46.601080    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:48:46.830202    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:48:46.896510    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:48:46.901174    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 21:48:47.076645    8582 pod_ready.go:102] pod "coredns-5dd5756b68-29fkp" in "kube-system" namespace has status "Ready":"False"
	I0307 21:48:47.101314    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:48:47.330807    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:48:47.398037    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:48:47.408496    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 21:48:47.600776    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:48:47.833217    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:48:47.926092    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:48:47.952620    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 21:48:48.070676    8582 pod_ready.go:92] pod "coredns-5dd5756b68-29fkp" in "kube-system" namespace has status "Ready":"True"
	I0307 21:48:48.070712    8582 pod_ready.go:81] duration metric: took 13.50738532s for pod "coredns-5dd5756b68-29fkp" in "kube-system" namespace to be "Ready" ...
	I0307 21:48:48.070726    8582 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-5mrbg" in "kube-system" namespace to be "Ready" ...
	I0307 21:48:48.073196    8582 pod_ready.go:97] error getting pod "coredns-5dd5756b68-5mrbg" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-5mrbg" not found
	I0307 21:48:48.073227    8582 pod_ready.go:81] duration metric: took 2.493175ms for pod "coredns-5dd5756b68-5mrbg" in "kube-system" namespace to be "Ready" ...
	E0307 21:48:48.073240    8582 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-5mrbg" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-5mrbg" not found
	I0307 21:48:48.073248    8582 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-963512" in "kube-system" namespace to be "Ready" ...
	I0307 21:48:48.080393    8582 pod_ready.go:92] pod "etcd-addons-963512" in "kube-system" namespace has status "Ready":"True"
	I0307 21:48:48.080420    8582 pod_ready.go:81] duration metric: took 7.154028ms for pod "etcd-addons-963512" in "kube-system" namespace to be "Ready" ...
	I0307 21:48:48.080445    8582 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-963512" in "kube-system" namespace to be "Ready" ...
	I0307 21:48:48.087501    8582 pod_ready.go:92] pod "kube-apiserver-addons-963512" in "kube-system" namespace has status "Ready":"True"
	I0307 21:48:48.087527    8582 pod_ready.go:81] duration metric: took 7.074266ms for pod "kube-apiserver-addons-963512" in "kube-system" namespace to be "Ready" ...
	I0307 21:48:48.087550    8582 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-963512" in "kube-system" namespace to be "Ready" ...
	I0307 21:48:48.095350    8582 pod_ready.go:92] pod "kube-controller-manager-addons-963512" in "kube-system" namespace has status "Ready":"True"
	I0307 21:48:48.095375    8582 pod_ready.go:81] duration metric: took 7.815982ms for pod "kube-controller-manager-addons-963512" in "kube-system" namespace to be "Ready" ...
	I0307 21:48:48.095388    8582 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-w6gxd" in "kube-system" namespace to be "Ready" ...
	I0307 21:48:48.104106    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:48:48.269007    8582 pod_ready.go:92] pod "kube-proxy-w6gxd" in "kube-system" namespace has status "Ready":"True"
	I0307 21:48:48.269043    8582 pod_ready.go:81] duration metric: took 173.64734ms for pod "kube-proxy-w6gxd" in "kube-system" namespace to be "Ready" ...
	I0307 21:48:48.269060    8582 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-963512" in "kube-system" namespace to be "Ready" ...
	I0307 21:48:48.330555    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:48:48.396609    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:48:48.402416    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0307 21:48:48.602197    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:48:48.667167    8582 pod_ready.go:92] pod "kube-scheduler-addons-963512" in "kube-system" namespace has status "Ready":"True"
	I0307 21:48:48.667194    8582 pod_ready.go:81] duration metric: took 398.125604ms for pod "kube-scheduler-addons-963512" in "kube-system" namespace to be "Ready" ...
	I0307 21:48:48.667207    8582 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-skr6t" in "kube-system" namespace to be "Ready" ...
	I0307 21:48:48.830978    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:48:48.896699    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:48:48.902063    8582 kapi.go:107] duration metric: took 9.505201603s to wait for kubernetes.io/minikube-addons=registry ...
	I0307 21:48:49.101100    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:48:49.330309    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:48:49.396695    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:48:49.600901    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:48:49.830739    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:48:49.897976    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:48:50.101815    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:48:50.331264    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:48:50.396803    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:48:50.600547    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:48:50.674209    8582 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-skr6t" in "kube-system" namespace has status "Ready":"False"
	I0307 21:48:50.834067    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:48:50.896820    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:48:51.101424    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:48:51.331314    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:48:51.397163    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:48:51.601217    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:48:51.830742    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:48:51.897498    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:48:52.101659    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:48:52.330932    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:48:52.396547    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:48:52.601178    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:48:52.830645    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:48:52.905502    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:48:53.100995    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:48:53.173655    8582 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-skr6t" in "kube-system" namespace has status "Ready":"False"
	I0307 21:48:53.330754    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:48:53.396473    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:48:53.600837    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:48:53.830123    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:48:53.897378    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:48:54.102737    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:48:54.332054    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:48:54.401279    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:48:54.601015    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:48:54.830967    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:48:54.896467    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:48:55.102651    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:48:55.175042    8582 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-skr6t" in "kube-system" namespace has status "Ready":"False"
	I0307 21:48:55.331036    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:48:55.396791    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:48:55.600719    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:48:55.830408    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:48:55.896081    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:48:56.100791    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:48:56.330299    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:48:56.397162    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:48:56.600557    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:48:56.830604    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:48:56.899747    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:48:57.101407    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:48:57.329764    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:48:57.395828    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:48:57.602554    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:48:57.678994    8582 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-skr6t" in "kube-system" namespace has status "Ready":"False"
	I0307 21:48:57.844672    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:48:57.896816    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:48:58.104640    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:48:58.335382    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:48:58.396749    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:48:58.600712    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:48:58.830815    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:48:58.896705    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:48:59.101591    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:48:59.331180    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:48:59.396675    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:48:59.605219    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:48:59.830611    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:48:59.896122    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:49:00.193413    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:49:00.201499    8582 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-skr6t" in "kube-system" namespace has status "Ready":"False"
	I0307 21:49:00.358309    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:49:00.397929    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:49:00.602511    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:49:00.831453    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:49:00.896085    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:49:01.101624    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:49:01.330525    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:49:01.396163    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:49:01.601630    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:49:01.831098    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:49:01.895967    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:49:02.101434    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:49:02.330804    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:49:02.397742    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:49:02.601765    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:49:02.676495    8582 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-skr6t" in "kube-system" namespace has status "Ready":"False"
	I0307 21:49:02.831701    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:49:02.898351    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:49:03.101861    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:49:03.332995    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:49:03.396014    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:49:03.602665    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:49:03.831156    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:49:03.896768    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:49:04.102004    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:49:04.331593    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:49:04.398215    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:49:04.600601    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:49:04.830297    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:49:04.896360    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:49:05.104204    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:49:05.182100    8582 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-skr6t" in "kube-system" namespace has status "Ready":"False"
	I0307 21:49:05.330570    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:49:05.396168    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:49:05.601490    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:49:05.830347    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:49:05.896477    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:49:06.101903    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:49:06.330571    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:49:06.396059    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:49:06.600599    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:49:06.830167    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:49:06.896850    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:49:07.111928    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:49:07.334848    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:49:07.396519    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:49:07.601440    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:49:07.673790    8582 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-skr6t" in "kube-system" namespace has status "Ready":"False"
	I0307 21:49:07.830045    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:49:07.896111    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:49:08.102892    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:49:08.330179    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:49:08.396035    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:49:08.601175    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:49:08.830469    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:49:08.903761    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:49:09.101511    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:49:09.330673    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:49:09.397092    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:49:09.601173    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:49:09.831384    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:49:09.898064    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:49:10.101041    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:49:10.175266    8582 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-skr6t" in "kube-system" namespace has status "Ready":"False"
	I0307 21:49:10.331101    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:49:10.395858    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:49:10.600129    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:49:10.831052    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:49:10.896190    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:49:11.102094    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:49:11.331464    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:49:11.396670    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:49:11.601137    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:49:11.830479    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:49:11.897054    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:49:12.101178    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:49:12.178607    8582 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-skr6t" in "kube-system" namespace has status "Ready":"False"
	I0307 21:49:12.332421    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:49:12.402977    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:49:12.601568    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:49:12.830472    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:49:12.897365    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:49:13.100904    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:49:13.331149    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:49:13.396523    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:49:13.602170    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:49:13.830240    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:49:13.897246    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:49:14.101054    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:49:14.330031    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:49:14.395936    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:49:14.600439    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:49:14.673705    8582 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-skr6t" in "kube-system" namespace has status "Ready":"True"
	I0307 21:49:14.673734    8582 pod_ready.go:81] duration metric: took 26.006519063s for pod "nvidia-device-plugin-daemonset-skr6t" in "kube-system" namespace to be "Ready" ...
	I0307 21:49:14.673745    8582 pod_ready.go:38] duration metric: took 40.121215117s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0307 21:49:14.673759    8582 api_server.go:52] waiting for apiserver process to appear ...
	I0307 21:49:14.673817    8582 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 21:49:14.693989    8582 api_server.go:72] duration metric: took 44.303134752s to wait for apiserver process to appear ...
	I0307 21:49:14.694012    8582 api_server.go:88] waiting for apiserver healthz status ...
	I0307 21:49:14.694032    8582 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0307 21:49:14.702391    8582 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0307 21:49:14.703645    8582 api_server.go:141] control plane version: v1.28.4
	I0307 21:49:14.703667    8582 api_server.go:131] duration metric: took 9.648294ms to wait for apiserver health ...
	I0307 21:49:14.703689    8582 system_pods.go:43] waiting for kube-system pods to appear ...
	I0307 21:49:14.714899    8582 system_pods.go:59] 18 kube-system pods found
	I0307 21:49:14.714934    8582 system_pods.go:61] "coredns-5dd5756b68-29fkp" [f8bab481-83f1-4618-a41d-c1f4f52609ab] Running
	I0307 21:49:14.714942    8582 system_pods.go:61] "csi-hostpath-attacher-0" [ae25642c-4c09-4ba4-986a-fc0894df39a3] Running
	I0307 21:49:14.714946    8582 system_pods.go:61] "csi-hostpath-resizer-0" [4e28b525-6966-4a98-9c3d-b3bc7b156676] Running
	I0307 21:49:14.714954    8582 system_pods.go:61] "csi-hostpathplugin-7rl2z" [f5001b06-75f0-4446-b62f-e7de26581524] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0307 21:49:14.714960    8582 system_pods.go:61] "etcd-addons-963512" [6b9f7896-fb74-4fcf-ae8a-65f8a002118e] Running
	I0307 21:49:14.714965    8582 system_pods.go:61] "kindnet-ch46s" [92a51680-a4d1-4636-b15d-185301519096] Running
	I0307 21:49:14.714970    8582 system_pods.go:61] "kube-apiserver-addons-963512" [2618ff7b-95ab-4a4a-9343-652bbd3c76c7] Running
	I0307 21:49:14.714974    8582 system_pods.go:61] "kube-controller-manager-addons-963512" [5ca9551f-1d7e-4be1-ae8b-67c3d148f01f] Running
	I0307 21:49:14.714980    8582 system_pods.go:61] "kube-ingress-dns-minikube" [9bb07d00-3bb1-44e2-b99e-2d1e7e05e2ad] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0307 21:49:14.714984    8582 system_pods.go:61] "kube-proxy-w6gxd" [7fead251-876a-464c-94cf-c12167cd82af] Running
	I0307 21:49:14.714989    8582 system_pods.go:61] "kube-scheduler-addons-963512" [6b4ff72e-13fe-4452-b503-74efebe5a751] Running
	I0307 21:49:14.714993    8582 system_pods.go:61] "metrics-server-69cf46c98-qzrq2" [026e1c6f-54a2-4f1b-83ab-9b6f24976fe3] Running
	I0307 21:49:14.714997    8582 system_pods.go:61] "nvidia-device-plugin-daemonset-skr6t" [bd96c754-234b-4da9-a225-d2510af33519] Running
	I0307 21:49:14.715001    8582 system_pods.go:61] "registry-g5s9h" [5acdf3e9-cdb0-4e0e-82bc-ce557a05d53f] Running
	I0307 21:49:14.715004    8582 system_pods.go:61] "registry-proxy-pn5c9" [0bc81ee0-ca94-4da6-9aa2-210e4709466d] Running
	I0307 21:49:14.715008    8582 system_pods.go:61] "snapshot-controller-58dbcc7b99-bqr6j" [67928c3e-8f43-49cf-b47f-8d32a9d92f23] Running
	I0307 21:49:14.715012    8582 system_pods.go:61] "snapshot-controller-58dbcc7b99-qjd59" [6b4b94ea-e889-4004-9dea-98d277831414] Running
	I0307 21:49:14.715016    8582 system_pods.go:61] "storage-provisioner" [60f33f3b-dabc-4b3a-a17f-ff46ff70601d] Running
	I0307 21:49:14.715022    8582 system_pods.go:74] duration metric: took 11.320976ms to wait for pod list to return data ...
	I0307 21:49:14.715033    8582 default_sa.go:34] waiting for default service account to be created ...
	I0307 21:49:14.717631    8582 default_sa.go:45] found service account: "default"
	I0307 21:49:14.717655    8582 default_sa.go:55] duration metric: took 2.609294ms for default service account to be created ...
	I0307 21:49:14.717664    8582 system_pods.go:116] waiting for k8s-apps to be running ...
	I0307 21:49:14.728649    8582 system_pods.go:86] 18 kube-system pods found
	I0307 21:49:14.728684    8582 system_pods.go:89] "coredns-5dd5756b68-29fkp" [f8bab481-83f1-4618-a41d-c1f4f52609ab] Running
	I0307 21:49:14.728692    8582 system_pods.go:89] "csi-hostpath-attacher-0" [ae25642c-4c09-4ba4-986a-fc0894df39a3] Running
	I0307 21:49:14.728697    8582 system_pods.go:89] "csi-hostpath-resizer-0" [4e28b525-6966-4a98-9c3d-b3bc7b156676] Running
	I0307 21:49:14.728706    8582 system_pods.go:89] "csi-hostpathplugin-7rl2z" [f5001b06-75f0-4446-b62f-e7de26581524] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0307 21:49:14.728712    8582 system_pods.go:89] "etcd-addons-963512" [6b9f7896-fb74-4fcf-ae8a-65f8a002118e] Running
	I0307 21:49:14.728717    8582 system_pods.go:89] "kindnet-ch46s" [92a51680-a4d1-4636-b15d-185301519096] Running
	I0307 21:49:14.728721    8582 system_pods.go:89] "kube-apiserver-addons-963512" [2618ff7b-95ab-4a4a-9343-652bbd3c76c7] Running
	I0307 21:49:14.728726    8582 system_pods.go:89] "kube-controller-manager-addons-963512" [5ca9551f-1d7e-4be1-ae8b-67c3d148f01f] Running
	I0307 21:49:14.728733    8582 system_pods.go:89] "kube-ingress-dns-minikube" [9bb07d00-3bb1-44e2-b99e-2d1e7e05e2ad] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0307 21:49:14.728740    8582 system_pods.go:89] "kube-proxy-w6gxd" [7fead251-876a-464c-94cf-c12167cd82af] Running
	I0307 21:49:14.728745    8582 system_pods.go:89] "kube-scheduler-addons-963512" [6b4ff72e-13fe-4452-b503-74efebe5a751] Running
	I0307 21:49:14.728753    8582 system_pods.go:89] "metrics-server-69cf46c98-qzrq2" [026e1c6f-54a2-4f1b-83ab-9b6f24976fe3] Running
	I0307 21:49:14.728757    8582 system_pods.go:89] "nvidia-device-plugin-daemonset-skr6t" [bd96c754-234b-4da9-a225-d2510af33519] Running
	I0307 21:49:14.728761    8582 system_pods.go:89] "registry-g5s9h" [5acdf3e9-cdb0-4e0e-82bc-ce557a05d53f] Running
	I0307 21:49:14.728772    8582 system_pods.go:89] "registry-proxy-pn5c9" [0bc81ee0-ca94-4da6-9aa2-210e4709466d] Running
	I0307 21:49:14.728778    8582 system_pods.go:89] "snapshot-controller-58dbcc7b99-bqr6j" [67928c3e-8f43-49cf-b47f-8d32a9d92f23] Running
	I0307 21:49:14.728784    8582 system_pods.go:89] "snapshot-controller-58dbcc7b99-qjd59" [6b4b94ea-e889-4004-9dea-98d277831414] Running
	I0307 21:49:14.728794    8582 system_pods.go:89] "storage-provisioner" [60f33f3b-dabc-4b3a-a17f-ff46ff70601d] Running
	I0307 21:49:14.728800    8582 system_pods.go:126] duration metric: took 11.131685ms to wait for k8s-apps to be running ...
	I0307 21:49:14.728807    8582 system_svc.go:44] waiting for kubelet service to be running ....
	I0307 21:49:14.728864    8582 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0307 21:49:14.745263    8582 system_svc.go:56] duration metric: took 16.446977ms WaitForService to wait for kubelet
	I0307 21:49:14.745292    8582 kubeadm.go:576] duration metric: took 44.354441105s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0307 21:49:14.745310    8582 node_conditions.go:102] verifying NodePressure condition ...
	I0307 21:49:14.748485    8582 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0307 21:49:14.748513    8582 node_conditions.go:123] node cpu capacity is 2
	I0307 21:49:14.748525    8582 node_conditions.go:105] duration metric: took 3.209963ms to run NodePressure ...
	I0307 21:49:14.748539    8582 start.go:240] waiting for startup goroutines ...
	I0307 21:49:14.836849    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:49:14.897631    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:49:15.101204    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:49:15.333797    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:49:15.397070    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:49:15.600875    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:49:15.831268    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:49:15.896861    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:49:16.101131    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:49:16.330606    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:49:16.401334    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:49:16.600920    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:49:16.830247    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:49:16.897935    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:49:17.100593    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:49:17.331207    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:49:17.396950    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:49:17.601797    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:49:17.830522    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:49:17.896826    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:49:18.109248    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:49:18.332316    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:49:18.396530    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:49:18.600800    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:49:18.830422    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:49:18.896557    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:49:19.101508    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:49:19.331066    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:49:19.395580    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:49:19.601130    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:49:19.831472    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:49:19.898106    8582 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0307 21:49:20.102219    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:49:20.335130    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:49:20.398399    8582 kapi.go:107] duration metric: took 41.006796118s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0307 21:49:20.601523    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:49:20.830490    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:49:21.101125    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:49:21.331539    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:49:21.601094    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:49:21.831815    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0307 21:49:22.101837    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:49:22.330849    8582 kapi.go:107] duration metric: took 40.004360558s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0307 21:49:22.332575    8582 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-963512 cluster.
	I0307 21:49:22.334680    8582 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0307 21:49:22.336493    8582 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
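
The gcp-auth-skip-secret opt-out mentioned in the message above is a plain pod label. A hypothetical manifest fragment, for illustration only (the label key is taken from the log message; the "true" value follows the addon's documented convention):

    apiVersion: v1
    kind: Pod
    metadata:
      name: no-creds                    # hypothetical pod name
      labels:
        gcp-auth-skip-secret: "true"    # opt this pod out of credential mounting
    spec:
      containers:
      - name: app
        image: nginx                    # placeholder image
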
	I0307 21:49:22.601312    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:49:23.102082    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:49:23.601012    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:49:24.104931    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:49:24.600239    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:49:25.115647    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:49:25.601391    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:49:26.100849    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:49:26.603245    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:49:27.101534    8582 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0307 21:49:27.601303    8582 kapi.go:107] duration metric: took 47.015922702s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0307 21:49:27.603818    8582 out.go:177] * Enabled addons: ingress-dns, cloud-spanner, nvidia-device-plugin, storage-provisioner, metrics-server, inspektor-gadget, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0307 21:49:27.606247    8582 addons.go:505] duration metric: took 57.215094396s for enable addons: enabled=[ingress-dns cloud-spanner nvidia-device-plugin storage-provisioner metrics-server inspektor-gadget yakd storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0307 21:49:27.606296    8582 start.go:245] waiting for cluster config update ...
	I0307 21:49:27.606315    8582 start.go:254] writing updated cluster config ...
	I0307 21:49:27.606997    8582 ssh_runner.go:195] Run: rm -f paused
	I0307 21:49:27.943787    8582 start.go:600] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0307 21:49:27.946985    8582 out.go:177] * Done! kubectl is now configured to use "addons-963512" cluster and "default" namespace by default
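
The kapi.go:96 lines above are label-selector readiness polls: list the pods matching a selector, check each pod's PodReady condition, sleep, and retry (the cadence in the log is roughly 500ms per selector). A minimal client-go sketch of that pattern, offered as a hypothetical illustration rather than minikube's actual code, assuming a standard local kubeconfig:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitForReady lists pods matching selector in ns and returns once every
    // matching pod reports the PodReady condition, mirroring the
    // "waiting for pod ..." lines in the log above.
    func waitForReady(cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
                metav1.ListOptions{LabelSelector: selector})
            if err == nil && len(pods.Items) > 0 && allReady(pods.Items) {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out waiting for %q", selector)
    }

    func allReady(pods []corev1.Pod) bool {
        for _, p := range pods {
            ready := false
            for _, c := range p.Status.Conditions {
                if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                    ready = true
                    break
                }
            }
            if !ready {
                return false
            }
        }
        return true
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        if err := waitForReady(cs, "ingress-nginx",
            "app.kubernetes.io/name=ingress-nginx", 8*time.Minute); err != nil {
            fmt.Println(err)
        }
    }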
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                         ATTEMPT             POD ID              POD
	3ea9e182c1b34       dd1b12fcb6097       6 seconds ago        Exited              hello-world-app              2                   1b6e54802d428       hello-world-app-5d77478584-frj9x
	4b5ce7ca15eab       be5e6f23a9904       29 seconds ago       Running             nginx                        0                   0e97238db2926       nginx
	3a90d0e29e456       bafe72500920c       About a minute ago   Running             gcp-auth                     0                   206a4bf8e18c1       gcp-auth-5f6b4f85fd-9g9vb
	30d5e600e3f06       c0cfb4ce73bda       About a minute ago   Running             nvidia-device-plugin-ctr     0                   56185c8410ca1       nvidia-device-plugin-daemonset-skr6t
	a24e7358e4b27       1a024e390dd05       About a minute ago   Exited              patch                        1                   eb3bb03f287ec       ingress-nginx-admission-patch-mv9ww
	5e9124a23573d       1a024e390dd05       About a minute ago   Exited              create                       0                   b08b4cbd6bb77       ingress-nginx-admission-create-gn8tt
	7c00de7883860       7ce2150c8929b       About a minute ago   Running             local-path-provisioner       0                   3bf5080f6b9e6       local-path-provisioner-78b46b4d5c-tcxbn
	a476486d2a80a       20e3f2db01e81       About a minute ago   Running             yakd                         0                   92db516cf69fb       yakd-dashboard-9947fc6bf-dw8s7
	2b23d0b44dd54       4d1e5c3e97420       About a minute ago   Running             volume-snapshot-controller   0                   94919244f42d5       snapshot-controller-58dbcc7b99-bqr6j
	cc95bc26ba37b       4d1e5c3e97420       About a minute ago   Running             volume-snapshot-controller   0                   211b3ebb631e5       snapshot-controller-58dbcc7b99-qjd59
	bae1f86f699e7       41340d5d57adb       About a minute ago   Running             cloud-spanner-emulator       0                   1c53186494ddc       cloud-spanner-emulator-6548d5df46-c4d87
	2a9c63a0cc630       97e04611ad434       About a minute ago   Running             coredns                      0                   f1603b63aad7f       coredns-5dd5756b68-29fkp
	79369d5a315db       ba04bb24b9575       About a minute ago   Running             storage-provisioner          0                   ea35e45248780       storage-provisioner
	9733620690c83       4740c1948d3fc       2 minutes ago        Running             kindnet-cni                  0                   7a93b69dacb02       kindnet-ch46s
	122c40b47440e       3ca3ca488cf13       2 minutes ago        Running             kube-proxy                   0                   034d2ca9d7a01       kube-proxy-w6gxd
	971300809d6dc       9961cbceaf234       2 minutes ago        Running             kube-controller-manager      0                   0134843e7b9b9       kube-controller-manager-addons-963512
	c93588c0a8cce       04b4c447bb9d4       2 minutes ago        Running             kube-apiserver               0                   885e346178767       kube-apiserver-addons-963512
	a122979188eac       05c284c929889       2 minutes ago        Running             kube-scheduler               0                   5f1b15db14af8       kube-scheduler-addons-963512
	920a86d849da2       9cdd6470f48c8       2 minutes ago        Running             etcd                         0                   2c43d971ea471       etcd-addons-963512
	
	
	==> containerd <==
	Mar 07 21:50:29 addons-963512 containerd[764]: time="2024-03-07T21:50:29.702900950Z" level=info msg="cleaning up dead shim"
	Mar 07 21:50:29 addons-963512 containerd[764]: time="2024-03-07T21:50:29.711869379Z" level=warning msg="cleanup warnings time=\"2024-03-07T21:50:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=8897 runtime=io.containerd.runc.v2\n"
	Mar 07 21:50:30 addons-963512 containerd[764]: time="2024-03-07T21:50:30.628034993Z" level=info msg="RemoveContainer for \"70496f1ab8ef7b866761277010419bc742ceefb80fd63f007777639fff5ca5e3\""
	Mar 07 21:50:30 addons-963512 containerd[764]: time="2024-03-07T21:50:30.635012998Z" level=info msg="RemoveContainer for \"70496f1ab8ef7b866761277010419bc742ceefb80fd63f007777639fff5ca5e3\" returns successfully"
	Mar 07 21:50:30 addons-963512 containerd[764]: time="2024-03-07T21:50:30.640240782Z" level=info msg="RemoveContainer for \"8e89db9594f54b2c2afb100adef116ef936275f2d00b4367cbcdb8cf7ac36894\""
	Mar 07 21:50:30 addons-963512 containerd[764]: time="2024-03-07T21:50:30.652970533Z" level=info msg="RemoveContainer for \"8e89db9594f54b2c2afb100adef116ef936275f2d00b4367cbcdb8cf7ac36894\" returns successfully"
	Mar 07 21:50:32 addons-963512 containerd[764]: time="2024-03-07T21:50:32.330035877Z" level=info msg="StopContainer for \"fa7d4d59097cb0d3d67776d7082916c9367684b40284d53e80efb5bd48a24b7b\" with timeout 2 (s)"
	Mar 07 21:50:32 addons-963512 containerd[764]: time="2024-03-07T21:50:32.330755168Z" level=info msg="Stop container \"fa7d4d59097cb0d3d67776d7082916c9367684b40284d53e80efb5bd48a24b7b\" with signal terminated"
	Mar 07 21:50:34 addons-963512 containerd[764]: time="2024-03-07T21:50:34.336969648Z" level=info msg="Kill container \"fa7d4d59097cb0d3d67776d7082916c9367684b40284d53e80efb5bd48a24b7b\""
	Mar 07 21:50:34 addons-963512 containerd[764]: time="2024-03-07T21:50:34.402356214Z" level=info msg="shim disconnected" id=fa7d4d59097cb0d3d67776d7082916c9367684b40284d53e80efb5bd48a24b7b
	Mar 07 21:50:34 addons-963512 containerd[764]: time="2024-03-07T21:50:34.402434343Z" level=warning msg="cleaning up after shim disconnected" id=fa7d4d59097cb0d3d67776d7082916c9367684b40284d53e80efb5bd48a24b7b namespace=k8s.io
	Mar 07 21:50:34 addons-963512 containerd[764]: time="2024-03-07T21:50:34.402448152Z" level=info msg="cleaning up dead shim"
	Mar 07 21:50:34 addons-963512 containerd[764]: time="2024-03-07T21:50:34.410854155Z" level=warning msg="cleanup warnings time=\"2024-03-07T21:50:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=9035 runtime=io.containerd.runc.v2\n"
	Mar 07 21:50:34 addons-963512 containerd[764]: time="2024-03-07T21:50:34.413622143Z" level=info msg="StopContainer for \"fa7d4d59097cb0d3d67776d7082916c9367684b40284d53e80efb5bd48a24b7b\" returns successfully"
	Mar 07 21:50:34 addons-963512 containerd[764]: time="2024-03-07T21:50:34.414226932Z" level=info msg="StopPodSandbox for \"6ed15c39d83e2e8de61a3692fa7104605328f53a213636f28b2196f98f08930e\""
	Mar 07 21:50:34 addons-963512 containerd[764]: time="2024-03-07T21:50:34.414284245Z" level=info msg="Container to stop \"fa7d4d59097cb0d3d67776d7082916c9367684b40284d53e80efb5bd48a24b7b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Mar 07 21:50:34 addons-963512 containerd[764]: time="2024-03-07T21:50:34.446376013Z" level=info msg="shim disconnected" id=6ed15c39d83e2e8de61a3692fa7104605328f53a213636f28b2196f98f08930e
	Mar 07 21:50:34 addons-963512 containerd[764]: time="2024-03-07T21:50:34.446445059Z" level=warning msg="cleaning up after shim disconnected" id=6ed15c39d83e2e8de61a3692fa7104605328f53a213636f28b2196f98f08930e namespace=k8s.io
	Mar 07 21:50:34 addons-963512 containerd[764]: time="2024-03-07T21:50:34.446457013Z" level=info msg="cleaning up dead shim"
	Mar 07 21:50:34 addons-963512 containerd[764]: time="2024-03-07T21:50:34.454448282Z" level=warning msg="cleanup warnings time=\"2024-03-07T21:50:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=9067 runtime=io.containerd.runc.v2\n"
	Mar 07 21:50:34 addons-963512 containerd[764]: time="2024-03-07T21:50:34.500141509Z" level=info msg="TearDown network for sandbox \"6ed15c39d83e2e8de61a3692fa7104605328f53a213636f28b2196f98f08930e\" successfully"
	Mar 07 21:50:34 addons-963512 containerd[764]: time="2024-03-07T21:50:34.500194932Z" level=info msg="StopPodSandbox for \"6ed15c39d83e2e8de61a3692fa7104605328f53a213636f28b2196f98f08930e\" returns successfully"
	Mar 07 21:50:34 addons-963512 containerd[764]: time="2024-03-07T21:50:34.647591739Z" level=info msg="RemoveContainer for \"fa7d4d59097cb0d3d67776d7082916c9367684b40284d53e80efb5bd48a24b7b\""
	Mar 07 21:50:34 addons-963512 containerd[764]: time="2024-03-07T21:50:34.653194251Z" level=info msg="RemoveContainer for \"fa7d4d59097cb0d3d67776d7082916c9367684b40284d53e80efb5bd48a24b7b\" returns successfully"
	Mar 07 21:50:34 addons-963512 containerd[764]: time="2024-03-07T21:50:34.653713461Z" level=error msg="ContainerStatus for \"fa7d4d59097cb0d3d67776d7082916c9367684b40284d53e80efb5bd48a24b7b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fa7d4d59097cb0d3d67776d7082916c9367684b40284d53e80efb5bd48a24b7b\": not found"
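
The sequence above is containerd's normal teardown path: StopContainer sends SIGTERM and escalates to a hard Kill after the 2s timeout, the shim reports the exit and is cleaned up, the pod sandbox is stopped, and once the container record is removed a ContainerStatus lookup correctly fails with NotFound. A sketch of the same stop-and-remove flow using containerd's Go client (hypothetical, with a truncated placeholder ID; in the cluster itself the kubelet drives this over CRI rather than through this client):

    package main

    import (
        "context"
        "log"
        "syscall"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // Kubernetes-managed containers live in the "k8s.io" namespace,
        // matching namespace=k8s.io in the shim cleanup lines above.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        id := "fa7d4d59097c..." // hypothetical: truncated ID of the container being stopped
        container, err := client.LoadContainer(ctx, id)
        if err != nil {
            log.Fatal(err)
        }
        task, err := container.Task(ctx, nil)
        if err != nil {
            log.Fatal(err)
        }
        exitCh, err := task.Wait(ctx)
        if err != nil {
            log.Fatal(err)
        }
        // "Stop container ... with signal terminated"
        if err := task.Kill(ctx, syscall.SIGTERM); err != nil {
            log.Fatal(err)
        }
        <-exitCh // the shim reports the exit, then disconnects
        if _, err := task.Delete(ctx); err != nil {
            log.Fatal(err) // deleting the task cleans up the dead shim
        }
        // "RemoveContainer ... returns successfully"; after this point,
        // status lookups for the ID fail with NotFound.
        if err := container.Delete(ctx); err != nil {
            log.Fatal(err)
        }
    }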
	
	
	==> coredns [2a9c63a0cc63048424bec8ba57f42b8cb9041975eefe514e6cb37a648b93960d] <==
	[INFO] 10.244.0.19:36288 - 3680 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000221398s
	[INFO] 10.244.0.19:56502 - 21815 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002681441s
	[INFO] 10.244.0.19:36288 - 16475 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002725527s
	[INFO] 10.244.0.19:56502 - 23750 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002559086s
	[INFO] 10.244.0.19:36288 - 42492 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001481495s
	[INFO] 10.244.0.19:36288 - 22817 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000222752s
	[INFO] 10.244.0.19:56502 - 38253 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000081075s
	[INFO] 10.244.0.19:39421 - 53845 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.00013723s
	[INFO] 10.244.0.19:38203 - 51947 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000244011s
	[INFO] 10.244.0.19:39421 - 30606 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.00022116s
	[INFO] 10.244.0.19:39421 - 55212 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000131626s
	[INFO] 10.244.0.19:38203 - 41667 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000193854s
	[INFO] 10.244.0.19:38203 - 60606 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000118924s
	[INFO] 10.244.0.19:39421 - 23545 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000145287s
	[INFO] 10.244.0.19:39421 - 10032 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000137238s
	[INFO] 10.244.0.19:38203 - 56967 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000126383s
	[INFO] 10.244.0.19:39421 - 26792 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000073468s
	[INFO] 10.244.0.19:38203 - 2566 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000158916s
	[INFO] 10.244.0.19:38203 - 17146 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000105042s
	[INFO] 10.244.0.19:39421 - 58842 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001501269s
	[INFO] 10.244.0.19:38203 - 54925 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001524933s
	[INFO] 10.244.0.19:39421 - 26355 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001449372s
	[INFO] 10.244.0.19:39421 - 9046 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000047573s
	[INFO] 10.244.0.19:38203 - 21924 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002268773s
	[INFO] 10.244.0.19:38203 - 30052 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000048385s
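
The NXDOMAIN-then-NOERROR pattern above is ordinary resolver search-path expansion. The querying pod (10.244.0.19, the ingress controller) resolves hello-world-app.default.svc.cluster.local, which has only four dots, so with the default ndots:5 every suffix on its search list is tried and rejected before the bare name finally answers NOERROR. A resolv.conf that would produce exactly this query sequence, reconstructed from the suffixes visible in the log (the nameserver address is the conventional kube-dns ClusterIP and is an assumption here):

    search ingress-nginx.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal
    nameserver 10.96.0.10
    options ndots:5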
	
	
	==> describe nodes <==
	Name:               addons-963512
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-963512
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3e3656b8cff33aafa60dd2a07a4b34bce666a6a6
	                    minikube.k8s.io/name=addons-963512
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_07T21_48_17_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-963512
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 07 Mar 2024 21:48:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-963512
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 07 Mar 2024 21:50:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 07 Mar 2024 21:50:19 +0000   Thu, 07 Mar 2024 21:48:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 07 Mar 2024 21:50:19 +0000   Thu, 07 Mar 2024 21:48:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 07 Mar 2024 21:50:19 +0000   Thu, 07 Mar 2024 21:48:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 07 Mar 2024 21:50:19 +0000   Thu, 07 Mar 2024 21:48:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-963512
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	System Info:
	  Machine ID:                 bb4fa9a1690241dd8c2e79b58eaaf79a
	  System UUID:                1874d978-b1e3-427a-849f-c838a3c17338
	  Boot ID:                    5a38287e-066f-43b8-a303-a60cdb318f8a
	  Kernel Version:             5.15.0-1055-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.6.28
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (17 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-6548d5df46-c4d87    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m1s
	  default                     hello-world-app-5d77478584-frj9x           0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         35s
	  gcp-auth                    gcp-auth-5f6b4f85fd-9g9vb                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 coredns-5dd5756b68-29fkp                   100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m6s
	  kube-system                 etcd-addons-963512                         100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m20s
	  kube-system                 kindnet-ch46s                              100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m6s
	  kube-system                 kube-apiserver-addons-963512               250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m20s
	  kube-system                 kube-controller-manager-addons-963512      200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m20s
	  kube-system                 kube-proxy-w6gxd                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m6s
	  kube-system                 kube-scheduler-addons-963512               100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m21s
	  kube-system                 nvidia-device-plugin-daemonset-skr6t       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 snapshot-controller-58dbcc7b99-bqr6j       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 snapshot-controller-58dbcc7b99-qjd59       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m
	  local-path-storage          local-path-provisioner-78b46b4d5c-tcxbn    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m
	  yakd-dashboard              yakd-dashboard-9947fc6bf-dw8s7             0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     2m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             348Mi (4%)  476Mi (6%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 2m3s   kube-proxy       
	  Normal  Starting                 2m20s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m20s  kubelet          Node addons-963512 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m20s  kubelet          Node addons-963512 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m20s  kubelet          Node addons-963512 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             2m20s  kubelet          Node addons-963512 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  2m20s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                2m20s  kubelet          Node addons-963512 status is now: NodeReady
	  Normal  RegisteredNode           2m6s   node-controller  Node addons-963512 event: Registered Node addons-963512 in Controller
	
	
	==> dmesg <==
	[Mar 7 21:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015962] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.450159] systemd[1]: /lib/systemd/system/cloud-config.service:8: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.002995] systemd[1]: /lib/systemd/system/cloud-init.service:19: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.004021] systemd[1]: /lib/systemd/system/cloud-init-local.service:15: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.054186] systemd[1]: /lib/systemd/system/cloud-init.target:15: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.004024] systemd[1]: /lib/systemd/system/cloud-final.service:9: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.715890] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.628665] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [920a86d849da2798ccea8cfd1b820820179d081b0c0572d8287ceb2be6fc9b6b] <==
	{"level":"info","ts":"2024-03-07T21:48:09.566821Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2024-03-07T21:48:09.56689Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2024-03-07T21:48:09.567894Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-03-07T21:48:09.56832Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-03-07T21:48:09.568337Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-03-07T21:48:09.568649Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-03-07T21:48:09.568673Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-07T21:48:10.460316Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-03-07T21:48:10.460416Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-03-07T21:48:10.460461Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-03-07T21:48:10.460506Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-03-07T21:48:10.46054Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-03-07T21:48:10.460579Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-03-07T21:48:10.46062Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-03-07T21:48:10.468456Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-963512 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-07T21:48:10.468562Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-07T21:48:10.469616Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-07T21:48:10.469853Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-07T21:48:10.472232Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-07T21:48:10.473218Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-03-07T21:48:10.484437Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-07T21:48:10.484684Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-07T21:48:10.488232Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-07T21:48:10.484723Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-07T21:48:10.536488Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> gcp-auth [3a90d0e29e4564c71421581edfef3805537dcad2b4682ebabe006499f46f7e07] <==
	2024/03/07 21:49:21 GCP Auth Webhook started!
	2024/03/07 21:49:39 Ready to marshal response ...
	2024/03/07 21:49:39 Ready to write response ...
	2024/03/07 21:50:01 Ready to marshal response ...
	2024/03/07 21:50:01 Ready to write response ...
	2024/03/07 21:50:01 Ready to marshal response ...
	2024/03/07 21:50:01 Ready to write response ...
	2024/03/07 21:50:13 Ready to marshal response ...
	2024/03/07 21:50:13 Ready to write response ...
	2024/03/07 21:50:18 Ready to marshal response ...
	2024/03/07 21:50:18 Ready to write response ...
	
	
	==> kernel <==
	 21:50:36 up 32 min,  0 users,  load average: 1.78, 1.06, 0.44
	Linux addons-963512 5.15.0-1055-aws #60~20.04.1-Ubuntu SMP Thu Feb 22 15:54:21 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kindnet [9733620690c83e7bf34ee884f9c825bad94e72799a184a757612dc15d19b71b1] <==
	I0307 21:48:34.911341       1 main.go:227] handling current node
	I0307 21:48:44.918023       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0307 21:48:44.918051       1 main.go:227] handling current node
	I0307 21:48:54.930169       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0307 21:48:54.930193       1 main.go:227] handling current node
	I0307 21:49:04.939610       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0307 21:49:04.939636       1 main.go:227] handling current node
	I0307 21:49:14.951656       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0307 21:49:14.951686       1 main.go:227] handling current node
	I0307 21:49:24.964106       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0307 21:49:24.964137       1 main.go:227] handling current node
	I0307 21:49:34.973681       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0307 21:49:34.973709       1 main.go:227] handling current node
	I0307 21:49:44.984748       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0307 21:49:44.984776       1 main.go:227] handling current node
	I0307 21:49:54.996887       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0307 21:49:54.997086       1 main.go:227] handling current node
	I0307 21:50:05.006867       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0307 21:50:05.006909       1 main.go:227] handling current node
	I0307 21:50:15.022985       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0307 21:50:15.023015       1 main.go:227] handling current node
	I0307 21:50:25.035785       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0307 21:50:25.035821       1 main.go:227] handling current node
	I0307 21:50:35.047631       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0307 21:50:35.047664       1 main.go:227] handling current node
	
	
	==> kube-apiserver [c93588c0a8cce89a2f9d23a8ebdffd7238817b290581737e0684d75181f2a45c] <==
	I0307 21:48:40.119420       1 alloc.go:330] "allocated clusterIPs" service="kube-system/csi-hostpath-attacher" clusterIPs={"IPv4":"10.104.222.42"}
	I0307 21:48:40.151455       1 controller.go:624] quota admission added evaluator for: statefulsets.apps
	W0307 21:48:40.256850       1 aggregator.go:166] failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0307 21:48:40.343102       1 alloc.go:330] "allocated clusterIPs" service="kube-system/csi-hostpath-resizer" clusterIPs={"IPv4":"10.106.228.59"}
	W0307 21:48:41.139600       1 aggregator.go:166] failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0307 21:48:41.723797       1 alloc.go:330] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.103.206.167"}
	E0307 21:49:04.089430       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.12.0:443/apis/metrics.k8s.io/v1beta1: Get "https://10.107.12.0:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.107.12.0:443: connect: connection refused
	W0307 21:49:04.089817       1 handler_proxy.go:93] no RequestInfo found in the context
	E0307 21:49:04.089964       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E0307 21:49:04.091745       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.12.0:443/apis/metrics.k8s.io/v1beta1: Get "https://10.107.12.0:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.107.12.0:443: connect: connection refused
	I0307 21:49:04.092030       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0307 21:49:04.097337       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.12.0:443/apis/metrics.k8s.io/v1beta1: Get "https://10.107.12.0:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.107.12.0:443: connect: connection refused
	I0307 21:49:04.208646       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0307 21:49:13.319888       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0307 21:49:55.880736       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0307 21:49:55.888843       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0307 21:49:56.912565       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0307 21:50:01.523381       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I0307 21:50:01.884133       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.105.120.104"}
	I0307 21:50:05.108455       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0307 21:50:12.791143       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0307 21:50:13.662880       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.96.97.144"}
	
	
	==> kube-controller-manager [971300809d6dc1c69864b575af34a12e82c177822af19738bd95669b6ddf21d1] <==
	W0307 21:50:04.226711       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0307 21:50:04.226750       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0307 21:50:06.000547       1 namespace_controller.go:182] "Namespace has been deleted" namespace="gadget"
	W0307 21:50:12.587955       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0307 21:50:12.587988       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0307 21:50:13.392624       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I0307 21:50:13.411403       1 event.go:307] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-frj9x"
	I0307 21:50:13.431907       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="40.077275ms"
	I0307 21:50:13.497035       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="65.077332ms"
	I0307 21:50:13.541928       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="44.839052ms"
	I0307 21:50:13.542233       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="83.348µs"
	I0307 21:50:14.963884       1 event.go:307] "Event occurred" object="default/hpvc-restore" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0307 21:50:15.131128       1 event.go:307] "Event occurred" object="default/hpvc-restore" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0307 21:50:16.540974       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="64.78µs"
	I0307 21:50:17.550549       1 event.go:307] "Event occurred" object="default/hpvc-restore" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0307 21:50:17.576639       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="44.734µs"
	I0307 21:50:18.566137       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="46.605µs"
	I0307 21:50:28.057918       1 stateful_set.go:458] "StatefulSet has been deleted" key="kube-system/csi-hostpath-attacher"
	I0307 21:50:28.169495       1 stateful_set.go:458] "StatefulSet has been deleted" key="kube-system/csi-hostpath-resizer"
	W0307 21:50:29.093458       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0307 21:50:29.093494       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0307 21:50:30.662073       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="91.872µs"
	I0307 21:50:31.302178       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I0307 21:50:31.309590       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-76dc478dd8" duration="7.672µs"
	I0307 21:50:31.312232       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	
	
	==> kube-proxy [122c40b47440e522ae9cdd9b8a1a45cab186bbcd20d893bb0dbe24c36c35a9a2] <==
	I0307 21:48:32.831030       1 server_others.go:69] "Using iptables proxy"
	I0307 21:48:32.849505       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I0307 21:48:32.934192       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0307 21:48:32.945014       1 server_others.go:152] "Using iptables Proxier"
	I0307 21:48:32.945051       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0307 21:48:32.945059       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0307 21:48:32.945090       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0307 21:48:32.945296       1 server.go:846] "Version info" version="v1.28.4"
	I0307 21:48:32.945306       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0307 21:48:32.947456       1 config.go:188] "Starting service config controller"
	I0307 21:48:32.947471       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0307 21:48:32.947505       1 config.go:97] "Starting endpoint slice config controller"
	I0307 21:48:32.947511       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0307 21:48:32.949775       1 config.go:315] "Starting node config controller"
	I0307 21:48:32.949787       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0307 21:48:33.048438       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0307 21:48:33.048494       1 shared_informer.go:318] Caches are synced for service config
	I0307 21:48:33.049834       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [a122979188eacd9de5ed8cadad536037c22516ea1fd49bb0f9b0981e82e62f82] <==
	W0307 21:48:14.335392       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0307 21:48:14.335425       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0307 21:48:14.335473       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0307 21:48:14.335490       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0307 21:48:14.335530       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0307 21:48:14.335548       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0307 21:48:14.335583       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0307 21:48:14.335599       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0307 21:48:14.337807       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0307 21:48:14.337840       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0307 21:48:14.337907       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0307 21:48:14.337933       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0307 21:48:14.340016       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0307 21:48:14.340044       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0307 21:48:14.340136       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0307 21:48:14.340157       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0307 21:48:14.340214       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0307 21:48:14.340231       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0307 21:48:14.340301       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0307 21:48:14.340323       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0307 21:48:14.342956       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0307 21:48:14.343134       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0307 21:48:14.343293       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0307 21:48:14.343341       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0307 21:48:15.726000       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 07 21:50:29 addons-963512 kubelet[1478]: I0307 21:50:29.769345    1478 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4wc9j\" (UniqueName: \"kubernetes.io/projected/9bb07d00-3bb1-44e2-b99e-2d1e7e05e2ad-kube-api-access-4wc9j\") pod \"9bb07d00-3bb1-44e2-b99e-2d1e7e05e2ad\" (UID: \"9bb07d00-3bb1-44e2-b99e-2d1e7e05e2ad\") "
	Mar 07 21:50:29 addons-963512 kubelet[1478]: I0307 21:50:29.771353    1478 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bb07d00-3bb1-44e2-b99e-2d1e7e05e2ad-kube-api-access-4wc9j" (OuterVolumeSpecName: "kube-api-access-4wc9j") pod "9bb07d00-3bb1-44e2-b99e-2d1e7e05e2ad" (UID: "9bb07d00-3bb1-44e2-b99e-2d1e7e05e2ad"). InnerVolumeSpecName "kube-api-access-4wc9j". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Mar 07 21:50:29 addons-963512 kubelet[1478]: I0307 21:50:29.869763    1478 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-4wc9j\" (UniqueName: \"kubernetes.io/projected/9bb07d00-3bb1-44e2-b99e-2d1e7e05e2ad-kube-api-access-4wc9j\") on node \"addons-963512\" DevicePath \"\""
	Mar 07 21:50:30 addons-963512 kubelet[1478]: I0307 21:50:30.541702    1478 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="4e28b525-6966-4a98-9c3d-b3bc7b156676" path="/var/lib/kubelet/pods/4e28b525-6966-4a98-9c3d-b3bc7b156676/volumes"
	Mar 07 21:50:30 addons-963512 kubelet[1478]: I0307 21:50:30.542098    1478 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="ae25642c-4c09-4ba4-986a-fc0894df39a3" path="/var/lib/kubelet/pods/ae25642c-4c09-4ba4-986a-fc0894df39a3/volumes"
	Mar 07 21:50:30 addons-963512 kubelet[1478]: I0307 21:50:30.542448    1478 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="f5001b06-75f0-4446-b62f-e7de26581524" path="/var/lib/kubelet/pods/f5001b06-75f0-4446-b62f-e7de26581524/volumes"
	Mar 07 21:50:30 addons-963512 kubelet[1478]: I0307 21:50:30.625891    1478 scope.go:117] "RemoveContainer" containerID="70496f1ab8ef7b866761277010419bc742ceefb80fd63f007777639fff5ca5e3"
	Mar 07 21:50:30 addons-963512 kubelet[1478]: I0307 21:50:30.632165    1478 scope.go:117] "RemoveContainer" containerID="3ea9e182c1b3426f0d101b1a4d7d49a809df97812d487e0912b3075dac42f767"
	Mar 07 21:50:30 addons-963512 kubelet[1478]: E0307 21:50:30.632520    1478 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5d77478584-frj9x_default(b0811f24-911e-472f-aa20-a0f6dffff240)\"" pod="default/hello-world-app-5d77478584-frj9x" podUID="b0811f24-911e-472f-aa20-a0f6dffff240"
	Mar 07 21:50:30 addons-963512 kubelet[1478]: I0307 21:50:30.636457    1478 scope.go:117] "RemoveContainer" containerID="8e89db9594f54b2c2afb100adef116ef936275f2d00b4367cbcdb8cf7ac36894"
	Mar 07 21:50:31 addons-963512 kubelet[1478]: I0307 21:50:31.538366    1478 kubelet_pods.go:906] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-skr6t" secret="" err="secret \"gcp-auth\" not found"
	Mar 07 21:50:32 addons-963512 kubelet[1478]: I0307 21:50:32.541401    1478 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="1d45f7a9-05f2-418c-9f70-99b1c2bfa121" path="/var/lib/kubelet/pods/1d45f7a9-05f2-418c-9f70-99b1c2bfa121/volumes"
	Mar 07 21:50:32 addons-963512 kubelet[1478]: I0307 21:50:32.541853    1478 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="36f3af08-b59e-40ba-9a39-695007d9cb26" path="/var/lib/kubelet/pods/36f3af08-b59e-40ba-9a39-695007d9cb26/volumes"
	Mar 07 21:50:32 addons-963512 kubelet[1478]: I0307 21:50:32.542243    1478 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="9bb07d00-3bb1-44e2-b99e-2d1e7e05e2ad" path="/var/lib/kubelet/pods/9bb07d00-3bb1-44e2-b99e-2d1e7e05e2ad/volumes"
	Mar 07 21:50:34 addons-963512 kubelet[1478]: I0307 21:50:34.608211    1478 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-swhsm\" (UniqueName: \"kubernetes.io/projected/2cb806f5-6cec-4632-874a-a41175208135-kube-api-access-swhsm\") pod \"2cb806f5-6cec-4632-874a-a41175208135\" (UID: \"2cb806f5-6cec-4632-874a-a41175208135\") "
	Mar 07 21:50:34 addons-963512 kubelet[1478]: I0307 21:50:34.608684    1478 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2cb806f5-6cec-4632-874a-a41175208135-webhook-cert\") pod \"2cb806f5-6cec-4632-874a-a41175208135\" (UID: \"2cb806f5-6cec-4632-874a-a41175208135\") "
	Mar 07 21:50:34 addons-963512 kubelet[1478]: I0307 21:50:34.610481    1478 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cb806f5-6cec-4632-874a-a41175208135-kube-api-access-swhsm" (OuterVolumeSpecName: "kube-api-access-swhsm") pod "2cb806f5-6cec-4632-874a-a41175208135" (UID: "2cb806f5-6cec-4632-874a-a41175208135"). InnerVolumeSpecName "kube-api-access-swhsm". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Mar 07 21:50:34 addons-963512 kubelet[1478]: I0307 21:50:34.613047    1478 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cb806f5-6cec-4632-874a-a41175208135-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "2cb806f5-6cec-4632-874a-a41175208135" (UID: "2cb806f5-6cec-4632-874a-a41175208135"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Mar 07 21:50:34 addons-963512 kubelet[1478]: I0307 21:50:34.645130    1478 scope.go:117] "RemoveContainer" containerID="fa7d4d59097cb0d3d67776d7082916c9367684b40284d53e80efb5bd48a24b7b"
	Mar 07 21:50:34 addons-963512 kubelet[1478]: I0307 21:50:34.653444    1478 scope.go:117] "RemoveContainer" containerID="fa7d4d59097cb0d3d67776d7082916c9367684b40284d53e80efb5bd48a24b7b"
	Mar 07 21:50:34 addons-963512 kubelet[1478]: E0307 21:50:34.653895    1478 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fa7d4d59097cb0d3d67776d7082916c9367684b40284d53e80efb5bd48a24b7b\": not found" containerID="fa7d4d59097cb0d3d67776d7082916c9367684b40284d53e80efb5bd48a24b7b"
	Mar 07 21:50:34 addons-963512 kubelet[1478]: I0307 21:50:34.653945    1478 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fa7d4d59097cb0d3d67776d7082916c9367684b40284d53e80efb5bd48a24b7b"} err="failed to get container status \"fa7d4d59097cb0d3d67776d7082916c9367684b40284d53e80efb5bd48a24b7b\": rpc error: code = NotFound desc = an error occurred when try to find container \"fa7d4d59097cb0d3d67776d7082916c9367684b40284d53e80efb5bd48a24b7b\": not found"
	Mar 07 21:50:34 addons-963512 kubelet[1478]: I0307 21:50:34.709325    1478 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-swhsm\" (UniqueName: \"kubernetes.io/projected/2cb806f5-6cec-4632-874a-a41175208135-kube-api-access-swhsm\") on node \"addons-963512\" DevicePath \"\""
	Mar 07 21:50:34 addons-963512 kubelet[1478]: I0307 21:50:34.709367    1478 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2cb806f5-6cec-4632-874a-a41175208135-webhook-cert\") on node \"addons-963512\" DevicePath \"\""
	Mar 07 21:50:36 addons-963512 kubelet[1478]: I0307 21:50:36.542285    1478 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="2cb806f5-6cec-4632-874a-a41175208135" path="/var/lib/kubelet/pods/2cb806f5-6cec-4632-874a-a41175208135/volumes"
	
	
	==> storage-provisioner [79369d5a315db50e2a00e1987fc2f9372358f87dcf1ff0177ac1f3a1d9f09159] <==
	I0307 21:48:38.179571       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0307 21:48:38.202176       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0307 21:48:38.202224       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0307 21:48:38.213360       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0307 21:48:38.215872       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-963512_b356bae8-d98a-4b20-835b-1cdd2b0d8327!
	I0307 21:48:38.215945       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ce7f5935-4b9c-4b6b-b5e5-91cca2dff8fa", APIVersion:"v1", ResourceVersion:"653", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-963512_b356bae8-d98a-4b20-835b-1cdd2b0d8327 became leader
	I0307 21:48:38.319625       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-963512_b356bae8-d98a-4b20-835b-1cdd2b0d8327!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-963512 -n addons-963512
helpers_test.go:261: (dbg) Run:  kubectl --context addons-963512 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/CSI FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/CSI (69.47s)
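
The kube-controller-manager events above show default/hpvc-restore still waiting on the external provisioner hostpath.csi.k8s.io when the test gave up. As a reproduction aid, here is a minimal Go sketch of the same wait-for-Bound poll, shelling out to kubectl the way the harness does; the 2-minute deadline and 2s interval are assumptions for illustration, not the test's actual values:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// Poll the restored PVC until it reports phase Bound, mirroring the
	// wait that timed out in TestAddons/parallel/CSI.
	func main() {
		deadline := time.Now().Add(2 * time.Minute) // assumed deadline
		for time.Now().Before(deadline) {
			out, err := exec.Command("kubectl", "--context", "addons-963512",
				"get", "pvc", "hpvc-restore", "-o", "jsonpath={.status.phase}").Output()
			if err == nil && strings.TrimSpace(string(out)) == "Bound" {
				fmt.Println("hpvc-restore: Bound")
				return
			}
			time.Sleep(2 * time.Second) // assumed poll interval
		}
		fmt.Println("hpvc-restore never bound; inspect the csi-hostpathplugin pod logs")
	}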

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-arm64 -p functional-894723 image load --daemon gcr.io/google-containers/addon-resizer:functional-894723 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-arm64 -p functional-894723 image load --daemon gcr.io/google-containers/addon-resizer:functional-894723 --alsologtostderr: (3.84280823s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-894723 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-894723" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.12s)
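
This failure, ImageReloadDaemon, and ImageTagAndLoadDaemon below all trip the same assertion at functional_test.go:442: the tag never appears in "image ls" after "image load --daemon" reports success. A stand-alone sketch of that load-then-list check, using the binary and profile from this run (not the harness's own code):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	const (
		bin     = "out/minikube-linux-arm64"
		profile = "functional-894723"
		tag     = "gcr.io/google-containers/addon-resizer:" + profile
	)

	// Load the image into the cluster runtime, then list images and look
	// for the tag, the same sequence the failing tests run.
	func main() {
		if err := exec.Command(bin, "-p", profile, "image", "load", "--daemon", tag).Run(); err != nil {
			fmt.Println("image load failed:", err)
			return
		}
		out, err := exec.Command(bin, "-p", profile, "image", "ls").Output()
		if err != nil {
			fmt.Println("image ls failed:", err)
			return
		}
		fmt.Println("tag present after load:", strings.Contains(string(out), tag))
	}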

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (4.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-arm64 -p functional-894723 image load --daemon gcr.io/google-containers/addon-resizer:functional-894723 --alsologtostderr
2024/03/07 21:56:34 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:364: (dbg) Done: out/minikube-linux-arm64 -p functional-894723 image load --daemon gcr.io/google-containers/addon-resizer:functional-894723 --alsologtostderr: (4.127693205s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-894723 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-894723" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (4.41s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.657215203s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-894723
functional_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p functional-894723 image load --daemon gcr.io/google-containers/addon-resizer:functional-894723 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-arm64 -p functional-894723 image load --daemon gcr.io/google-containers/addon-resizer:functional-894723 --alsologtostderr: (3.100910776s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-894723 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-894723" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.03s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-arm64 -p functional-894723 image save gcr.io/google-containers/addon-resizer:functional-894723 /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:385: expected "/home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.58s)
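
The assertion at functional_test.go:385 is a plain existence check on the tarball that "image save" should have written. A minimal sketch of that check (the path is the one the test expected); the missing file also explains the stat error in ImageLoadFromFile below:

	package main

	import (
		"fmt"
		"os"
	)

	// Stat the tarball that `image save` should have produced.
	func main() {
		p := "/home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar"
		if fi, err := os.Stat(p); err != nil {
			fmt.Println("save output missing:", err)
		} else {
			fmt.Printf("%s exists (%d bytes)\n", p, fi.Size())
		}
	}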

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-arm64 -p functional-894723 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:410: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I0307 21:56:43.652141   40728 out.go:291] Setting OutFile to fd 1 ...
	I0307 21:56:43.652447   40728 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 21:56:43.652477   40728 out.go:304] Setting ErrFile to fd 2...
	I0307 21:56:43.652495   40728 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 21:56:43.652775   40728 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18320-2408/.minikube/bin
	I0307 21:56:43.653483   40728 config.go:182] Loaded profile config "functional-894723": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0307 21:56:43.653653   40728 config.go:182] Loaded profile config "functional-894723": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0307 21:56:43.654194   40728 cli_runner.go:164] Run: docker container inspect functional-894723 --format={{.State.Status}}
	I0307 21:56:43.676579   40728 ssh_runner.go:195] Run: systemctl --version
	I0307 21:56:43.676668   40728 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-894723
	I0307 21:56:43.691763   40728 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/18320-2408/.minikube/machines/functional-894723/id_rsa Username:docker}
	I0307 21:56:43.792546   40728 cache_images.go:286] Loading image from: /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar
	W0307 21:56:43.792606   40728 cache_images.go:254] Failed to load cached images for profile functional-894723. make sure the profile is running. loading images: stat /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar: no such file or directory
	I0307 21:56:43.792639   40728 cache_images.go:262] succeeded pushing to: 
	I0307 21:56:43.792645   40728 cache_images.go:263] failed pushing to: functional-894723

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.21s)
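
Given the "stat ...: no such file or directory" in the stderr above, this looks like fallout from the ImageSaveToFile failure rather than an independent load bug. A round-trip sketch (the temp-file path is an assumption) that would surface the problem at the save step instead of the load step:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
	)

	// Run save-to-file and load-from-file back to back, so a tarball that
	// was never written fails the first step rather than the second.
	func main() {
		tar := filepath.Join(os.TempDir(), "addon-resizer-roundtrip.tar") // assumed path
		tag := "gcr.io/google-containers/addon-resizer:functional-894723"
		if out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-894723",
			"image", "save", tag, tar).CombinedOutput(); err != nil {
			fmt.Printf("save failed: %v\n%s", err, out)
			return
		}
		if out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-894723",
			"image", "load", tar).CombinedOutput(); err != nil {
			fmt.Printf("load failed: %v\n%s", err, out)
		}
	}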

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (375.58s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-497253 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
E0307 22:33:48.560445    7764 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/functional-894723/client.crt: no such file or directory
E0307 22:34:28.003699    7764 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/addons-963512/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p old-k8s-version-497253 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: exit status 102 (6m11.376429535s)

                                                
                                                
-- stdout --
	* [old-k8s-version-497253] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18320
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18320-2408/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18320-2408/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the docker driver based on existing profile
	* Starting "old-k8s-version-497253" primary control-plane node in "old-k8s-version-497253" cluster
	* Pulling base image v0.0.42-1708944392-18244 ...
	* Restarting existing docker container for "old-k8s-version-497253" ...
	* Preparing Kubernetes v1.20.0 on containerd 1.6.28 ...
	* Verifying Kubernetes components...
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image registry.k8s.io/echoserver:1.4
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-497253 addons enable metrics-server
	
	* Enabled addons: storage-provisioner, metrics-server, dashboard, default-storageclass
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0307 22:33:37.520916  203173 out.go:291] Setting OutFile to fd 1 ...
	I0307 22:33:37.521037  203173 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 22:33:37.521047  203173 out.go:304] Setting ErrFile to fd 2...
	I0307 22:33:37.521053  203173 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 22:33:37.521323  203173 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18320-2408/.minikube/bin
	I0307 22:33:37.521682  203173 out.go:298] Setting JSON to false
	I0307 22:33:37.522539  203173 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":4561,"bootTime":1709846257,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0307 22:33:37.522615  203173 start.go:139] virtualization:  
	I0307 22:33:37.526742  203173 out.go:177] * [old-k8s-version-497253] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0307 22:33:37.529143  203173 notify.go:220] Checking for updates...
	I0307 22:33:37.532153  203173 out.go:177]   - MINIKUBE_LOCATION=18320
	I0307 22:33:37.534078  203173 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 22:33:37.536033  203173 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18320-2408/kubeconfig
	I0307 22:33:37.538144  203173 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18320-2408/.minikube
	I0307 22:33:37.540391  203173 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0307 22:33:37.542283  203173 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 22:33:37.544682  203173 config.go:182] Loaded profile config "old-k8s-version-497253": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0307 22:33:37.547030  203173 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0307 22:33:37.548958  203173 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 22:33:37.572032  203173 docker.go:122] docker version: linux-25.0.4:Docker Engine - Community
	I0307 22:33:37.572143  203173 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0307 22:33:37.670210  203173 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:55 OomKillDisable:true NGoroutines:71 SystemTime:2024-03-07 22:33:37.65882932 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0307 22:33:37.670417  203173 docker.go:295] overlay module found
	I0307 22:33:37.673064  203173 out.go:177] * Using the docker driver based on existing profile
	I0307 22:33:37.675039  203173 start.go:297] selected driver: docker
	I0307 22:33:37.675055  203173 start.go:901] validating driver "docker" against &{Name:old-k8s-version-497253 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-497253 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 22:33:37.675165  203173 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 22:33:37.675782  203173 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0307 22:33:37.753720  203173 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:55 OomKillDisable:true NGoroutines:71 SystemTime:2024-03-07 22:33:37.74447346 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0307 22:33:37.754061  203173 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0307 22:33:37.754114  203173 cni.go:84] Creating CNI manager for ""
	I0307 22:33:37.754124  203173 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0307 22:33:37.754174  203173 start.go:340] cluster config:
	{Name:old-k8s-version-497253 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-497253 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 22:33:37.758977  203173 out.go:177] * Starting "old-k8s-version-497253" primary control-plane node in "old-k8s-version-497253" cluster
	I0307 22:33:37.761555  203173 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0307 22:33:37.763983  203173 out.go:177] * Pulling base image v0.0.42-1708944392-18244 ...
	I0307 22:33:37.766430  203173 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0307 22:33:37.766559  203173 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local docker daemon
	I0307 22:33:37.766672  203173 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18320-2408/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0307 22:33:37.766716  203173 cache.go:56] Caching tarball of preloaded images
	I0307 22:33:37.766892  203173 preload.go:173] Found /home/jenkins/minikube-integration/18320-2408/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0307 22:33:37.766913  203173 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
	I0307 22:33:37.767123  203173 profile.go:142] Saving config to /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/old-k8s-version-497253/config.json ...
	I0307 22:33:37.801786  203173 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local docker daemon, skipping pull
	I0307 22:33:37.801808  203173 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 exists in daemon, skipping load
	I0307 22:33:37.801825  203173 cache.go:194] Successfully downloaded all kic artifacts
	I0307 22:33:37.801853  203173 start.go:360] acquireMachinesLock for old-k8s-version-497253: {Name:mk272fd05919b45bf55ab72823af0f8539dbef2c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 22:33:37.801910  203173 start.go:364] duration metric: took 38.023µs to acquireMachinesLock for "old-k8s-version-497253"
	I0307 22:33:37.801936  203173 start.go:96] Skipping create...Using existing machine configuration
	I0307 22:33:37.801941  203173 fix.go:54] fixHost starting: 
	I0307 22:33:37.802218  203173 cli_runner.go:164] Run: docker container inspect old-k8s-version-497253 --format={{.State.Status}}
	I0307 22:33:37.831469  203173 fix.go:112] recreateIfNeeded on old-k8s-version-497253: state=Stopped err=<nil>
	W0307 22:33:37.831510  203173 fix.go:138] unexpected machine state, will restart: <nil>
	I0307 22:33:37.836854  203173 out.go:177] * Restarting existing docker container for "old-k8s-version-497253" ...
	I0307 22:33:37.838860  203173 cli_runner.go:164] Run: docker start old-k8s-version-497253
	I0307 22:33:38.235309  203173 cli_runner.go:164] Run: docker container inspect old-k8s-version-497253 --format={{.State.Status}}
	I0307 22:33:38.266234  203173 kic.go:430] container "old-k8s-version-497253" state is running.
	I0307 22:33:38.266620  203173 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-497253
	I0307 22:33:38.291715  203173 profile.go:142] Saving config to /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/old-k8s-version-497253/config.json ...
	I0307 22:33:38.291940  203173 machine.go:94] provisionDockerMachine start ...
	I0307 22:33:38.291995  203173 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-497253
	I0307 22:33:38.321612  203173 main.go:141] libmachine: Using SSH client type: native
	I0307 22:33:38.321883  203173 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1c40] 0x3e44a0 <nil>  [] 0s} 127.0.0.1 33067 <nil> <nil>}
	I0307 22:33:38.321895  203173 main.go:141] libmachine: About to run SSH command:
	hostname
	I0307 22:33:38.322425  203173 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56046->127.0.0.1:33067: read: connection reset by peer
	I0307 22:33:41.447669  203173 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-497253
	
	I0307 22:33:41.447721  203173 ubuntu.go:169] provisioning hostname "old-k8s-version-497253"
	I0307 22:33:41.447788  203173 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-497253
	I0307 22:33:41.463980  203173 main.go:141] libmachine: Using SSH client type: native
	I0307 22:33:41.464228  203173 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1c40] 0x3e44a0 <nil>  [] 0s} 127.0.0.1 33067 <nil> <nil>}
	I0307 22:33:41.464245  203173 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-497253 && echo "old-k8s-version-497253" | sudo tee /etc/hostname
	I0307 22:33:41.607450  203173 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-497253
	
	I0307 22:33:41.607543  203173 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-497253
	I0307 22:33:41.626478  203173 main.go:141] libmachine: Using SSH client type: native
	I0307 22:33:41.626811  203173 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1c40] 0x3e44a0 <nil>  [] 0s} 127.0.0.1 33067 <nil> <nil>}
	I0307 22:33:41.626837  203173 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-497253' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-497253/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-497253' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0307 22:33:41.757125  203173 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0307 22:33:41.757215  203173 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18320-2408/.minikube CaCertPath:/home/jenkins/minikube-integration/18320-2408/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18320-2408/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18320-2408/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18320-2408/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18320-2408/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18320-2408/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18320-2408/.minikube}
	I0307 22:33:41.757242  203173 ubuntu.go:177] setting up certificates
	I0307 22:33:41.757251  203173 provision.go:84] configureAuth start
	I0307 22:33:41.757324  203173 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-497253
	I0307 22:33:41.774454  203173 provision.go:143] copyHostCerts
	I0307 22:33:41.774519  203173 exec_runner.go:144] found /home/jenkins/minikube-integration/18320-2408/.minikube/ca.pem, removing ...
	I0307 22:33:41.774541  203173 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18320-2408/.minikube/ca.pem
	I0307 22:33:41.774633  203173 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18320-2408/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18320-2408/.minikube/ca.pem (1078 bytes)
	I0307 22:33:41.774731  203173 exec_runner.go:144] found /home/jenkins/minikube-integration/18320-2408/.minikube/cert.pem, removing ...
	I0307 22:33:41.774741  203173 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18320-2408/.minikube/cert.pem
	I0307 22:33:41.774770  203173 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18320-2408/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18320-2408/.minikube/cert.pem (1123 bytes)
	I0307 22:33:41.774826  203173 exec_runner.go:144] found /home/jenkins/minikube-integration/18320-2408/.minikube/key.pem, removing ...
	I0307 22:33:41.774833  203173 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18320-2408/.minikube/key.pem
	I0307 22:33:41.774864  203173 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18320-2408/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18320-2408/.minikube/key.pem (1675 bytes)
	I0307 22:33:41.774919  203173 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18320-2408/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18320-2408/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18320-2408/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-497253 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-497253]
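	(For reference: the server certificate generated above can be reproduced outside minikube with plain openssl. This is a hedged sketch under bash, using the org and SANs copied from the log line, not minikube's actual implementation, which builds the cert in Go via crypto/x509; file names are illustrative only.)
	# Hedged sketch: issue a server cert signed by the minikube CA with the
	# same org and SANs as the log line above. Paths are illustrative.
	openssl req -new -newkey rsa:2048 -nodes \
	  -keyout server-key.pem -out server.csr \
	  -subj "/O=jenkins.old-k8s-version-497253"
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem \
	  -CAcreateserial -out server.pem -days 365 \
	  -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.76.2,DNS:localhost,DNS:minikube,DNS:old-k8s-version-497253')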
	I0307 22:33:42.240409  203173 provision.go:177] copyRemoteCerts
	I0307 22:33:42.240544  203173 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0307 22:33:42.240628  203173 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-497253
	I0307 22:33:42.260173  203173 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33067 SSHKeyPath:/home/jenkins/minikube-integration/18320-2408/.minikube/machines/old-k8s-version-497253/id_rsa Username:docker}
	I0307 22:33:42.358244  203173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18320-2408/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0307 22:33:42.384218  203173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18320-2408/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0307 22:33:42.409480  203173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18320-2408/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0307 22:33:42.437462  203173 provision.go:87] duration metric: took 680.197298ms to configureAuth
	I0307 22:33:42.437535  203173 ubuntu.go:193] setting minikube options for container-runtime
	I0307 22:33:42.437798  203173 config.go:182] Loaded profile config "old-k8s-version-497253": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0307 22:33:42.437829  203173 machine.go:97] duration metric: took 4.145881451s to provisionDockerMachine
	I0307 22:33:42.437862  203173 start.go:293] postStartSetup for "old-k8s-version-497253" (driver="docker")
	I0307 22:33:42.437889  203173 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0307 22:33:42.438009  203173 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0307 22:33:42.438074  203173 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-497253
	I0307 22:33:42.454665  203173 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33067 SSHKeyPath:/home/jenkins/minikube-integration/18320-2408/.minikube/machines/old-k8s-version-497253/id_rsa Username:docker}
	I0307 22:33:42.549164  203173 ssh_runner.go:195] Run: cat /etc/os-release
	I0307 22:33:42.552153  203173 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0307 22:33:42.552191  203173 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0307 22:33:42.552201  203173 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0307 22:33:42.552208  203173 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0307 22:33:42.552224  203173 filesync.go:126] Scanning /home/jenkins/minikube-integration/18320-2408/.minikube/addons for local assets ...
	I0307 22:33:42.552308  203173 filesync.go:126] Scanning /home/jenkins/minikube-integration/18320-2408/.minikube/files for local assets ...
	I0307 22:33:42.552405  203173 filesync.go:149] local asset: /home/jenkins/minikube-integration/18320-2408/.minikube/files/etc/ssl/certs/77642.pem -> 77642.pem in /etc/ssl/certs
	I0307 22:33:42.552515  203173 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0307 22:33:42.560853  203173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18320-2408/.minikube/files/etc/ssl/certs/77642.pem --> /etc/ssl/certs/77642.pem (1708 bytes)
	I0307 22:33:42.585157  203173 start.go:296] duration metric: took 147.264367ms for postStartSetup
	I0307 22:33:42.585285  203173 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0307 22:33:42.585375  203173 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-497253
	I0307 22:33:42.605899  203173 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33067 SSHKeyPath:/home/jenkins/minikube-integration/18320-2408/.minikube/machines/old-k8s-version-497253/id_rsa Username:docker}
	I0307 22:33:42.696824  203173 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0307 22:33:42.701241  203173 fix.go:56] duration metric: took 4.899293916s for fixHost
	I0307 22:33:42.701270  203173 start.go:83] releasing machines lock for "old-k8s-version-497253", held for 4.899351253s
	I0307 22:33:42.701348  203173 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-497253
	I0307 22:33:42.718888  203173 ssh_runner.go:195] Run: cat /version.json
	I0307 22:33:42.718944  203173 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-497253
	I0307 22:33:42.719001  203173 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0307 22:33:42.719064  203173 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-497253
	I0307 22:33:42.737025  203173 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33067 SSHKeyPath:/home/jenkins/minikube-integration/18320-2408/.minikube/machines/old-k8s-version-497253/id_rsa Username:docker}
	I0307 22:33:42.748363  203173 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33067 SSHKeyPath:/home/jenkins/minikube-integration/18320-2408/.minikube/machines/old-k8s-version-497253/id_rsa Username:docker}
	I0307 22:33:42.942478  203173 ssh_runner.go:195] Run: systemctl --version
	I0307 22:33:42.946846  203173 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0307 22:33:42.951141  203173 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0307 22:33:42.968495  203173 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
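	(The dense find/sed pipeline above amounts to: for every loopback CNI config under /etc/cni/net.d, add a "name" field if one is missing and pin cniVersion to 1.0.0. A hedged before/after on a hypothetical /etc/cni/net.d/200-loopback.conf, shown as bash comments since the exact file was not captured in this run:)
	# before (hypothetical file contents):
	#   { "cniVersion": "0.3.1", "type": "loopback" }
	# after the patch:
	#   { "cniVersion": "1.0.0", "name": "loopback", "type": "loopback" }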
	I0307 22:33:42.968571  203173 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0307 22:33:42.977244  203173 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0307 22:33:42.977278  203173 start.go:494] detecting cgroup driver to use...
	I0307 22:33:42.977330  203173 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0307 22:33:42.977399  203173 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0307 22:33:42.993667  203173 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0307 22:33:43.008461  203173 docker.go:217] disabling cri-docker service (if available) ...
	I0307 22:33:43.008529  203173 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0307 22:33:43.022825  203173 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0307 22:33:43.034815  203173 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0307 22:33:43.125205  203173 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0307 22:33:43.232214  203173 docker.go:233] disabling docker service ...
	I0307 22:33:43.232303  203173 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0307 22:33:43.246233  203173 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0307 22:33:43.258211  203173 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0307 22:33:43.373408  203173 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0307 22:33:43.467970  203173 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0307 22:33:43.479610  203173 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0307 22:33:43.495961  203173 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0307 22:33:43.506100  203173 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0307 22:33:43.519913  203173 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0307 22:33:43.520009  203173 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0307 22:33:43.531606  203173 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0307 22:33:43.542438  203173 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0307 22:33:43.553110  203173 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0307 22:33:43.563620  203173 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0307 22:33:43.573843  203173 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0307 22:33:43.583467  203173 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0307 22:33:43.592261  203173 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0307 22:33:43.602763  203173 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 22:33:43.699516  203173 ssh_runner.go:195] Run: sudo systemctl restart containerd
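	(Taken together, the sed edits above pin containerd's CRI plugin to the cgroupfs driver, the v1.20-era pause image registry.k8s.io/pause:3.2, and the standard CNI conf dir. A quick hedged check that the restart picked them up; these commands are not part of the test log:)
	# Hedged verification, not from the test run:
	sudo grep -nE 'SystemdCgroup|sandbox_image|restrict_oom_score_adj|conf_dir' \
	  /etc/containerd/config.toml
	sudo systemctl is-active containerd    # expect: active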
	I0307 22:33:43.869994  203173 start.go:541] Will wait 60s for socket path /run/containerd/containerd.sock
	I0307 22:33:43.870062  203173 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0307 22:33:43.875251  203173 start.go:562] Will wait 60s for crictl version
	I0307 22:33:43.875314  203173 ssh_runner.go:195] Run: which crictl
	I0307 22:33:43.879121  203173 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0307 22:33:43.918981  203173 start.go:578] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.28
	RuntimeApiVersion:  v1
	I0307 22:33:43.919063  203173 ssh_runner.go:195] Run: containerd --version
	I0307 22:33:43.941710  203173 ssh_runner.go:195] Run: containerd --version
	I0307 22:33:43.969199  203173 out.go:177] * Preparing Kubernetes v1.20.0 on containerd 1.6.28 ...
	I0307 22:33:43.970890  203173 cli_runner.go:164] Run: docker network inspect old-k8s-version-497253 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0307 22:33:43.985333  203173 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0307 22:33:43.990181  203173 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0307 22:33:44.004046  203173 kubeadm.go:877] updating cluster {Name:old-k8s-version-497253 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-497253 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0307 22:33:44.004192  203173 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0307 22:33:44.004262  203173 ssh_runner.go:195] Run: sudo crictl images --output json
	I0307 22:33:44.050751  203173 containerd.go:612] all images are preloaded for containerd runtime.
	I0307 22:33:44.050779  203173 containerd.go:519] Images already preloaded, skipping extraction
	I0307 22:33:44.050853  203173 ssh_runner.go:195] Run: sudo crictl images --output json
	I0307 22:33:44.087263  203173 containerd.go:612] all images are preloaded for containerd runtime.
	I0307 22:33:44.087288  203173 cache_images.go:84] Images are preloaded, skipping loading
	I0307 22:33:44.087297  203173 kubeadm.go:928] updating node { 192.168.76.2 8443 v1.20.0 containerd true true} ...
	I0307 22:33:44.087419  203173 kubeadm.go:940] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=old-k8s-version-497253 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-497253 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0307 22:33:44.087489  203173 ssh_runner.go:195] Run: sudo crictl info
	I0307 22:33:44.124722  203173 cni.go:84] Creating CNI manager for ""
	I0307 22:33:44.124748  203173 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0307 22:33:44.124758  203173 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0307 22:33:44.124779  203173 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-497253 NodeName:old-k8s-version-497253 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0307 22:33:44.124916  203173 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "old-k8s-version-497253"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0307 22:33:44.124990  203173 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0307 22:33:44.133783  203173 binaries.go:44] Found k8s binaries, skipping transfer
	I0307 22:33:44.133855  203173 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0307 22:33:44.142271  203173 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (442 bytes)
	I0307 22:33:44.160371  203173 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0307 22:33:44.178020  203173 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2125 bytes)
	I0307 22:33:44.199193  203173 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0307 22:33:44.202611  203173 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0307 22:33:44.213161  203173 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 22:33:44.310236  203173 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0307 22:33:44.325404  203173 certs.go:68] Setting up /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/old-k8s-version-497253 for IP: 192.168.76.2
	I0307 22:33:44.325428  203173 certs.go:194] generating shared ca certs ...
	I0307 22:33:44.325444  203173 certs.go:226] acquiring lock for ca certs: {Name:mk7f303c61c8508a802bee4114a394243ccd109f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 22:33:44.325588  203173 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18320-2408/.minikube/ca.key
	I0307 22:33:44.325635  203173 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18320-2408/.minikube/proxy-client-ca.key
	I0307 22:33:44.325646  203173 certs.go:256] generating profile certs ...
	I0307 22:33:44.325739  203173 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/old-k8s-version-497253/client.key
	I0307 22:33:44.325821  203173 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/old-k8s-version-497253/apiserver.key.897d491f
	I0307 22:33:44.325865  203173 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/old-k8s-version-497253/proxy-client.key
	I0307 22:33:44.325978  203173 certs.go:484] found cert: /home/jenkins/minikube-integration/18320-2408/.minikube/certs/7764.pem (1338 bytes)
	W0307 22:33:44.326011  203173 certs.go:480] ignoring /home/jenkins/minikube-integration/18320-2408/.minikube/certs/7764_empty.pem, impossibly tiny 0 bytes
	I0307 22:33:44.326024  203173 certs.go:484] found cert: /home/jenkins/minikube-integration/18320-2408/.minikube/certs/ca-key.pem (1679 bytes)
	I0307 22:33:44.326048  203173 certs.go:484] found cert: /home/jenkins/minikube-integration/18320-2408/.minikube/certs/ca.pem (1078 bytes)
	I0307 22:33:44.326079  203173 certs.go:484] found cert: /home/jenkins/minikube-integration/18320-2408/.minikube/certs/cert.pem (1123 bytes)
	I0307 22:33:44.326103  203173 certs.go:484] found cert: /home/jenkins/minikube-integration/18320-2408/.minikube/certs/key.pem (1675 bytes)
	I0307 22:33:44.326149  203173 certs.go:484] found cert: /home/jenkins/minikube-integration/18320-2408/.minikube/files/etc/ssl/certs/77642.pem (1708 bytes)
	I0307 22:33:44.326841  203173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18320-2408/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0307 22:33:44.353373  203173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18320-2408/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0307 22:33:44.378337  203173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18320-2408/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0307 22:33:44.403309  203173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18320-2408/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0307 22:33:44.429920  203173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/old-k8s-version-497253/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0307 22:33:44.457103  203173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/old-k8s-version-497253/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0307 22:33:44.488287  203173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/old-k8s-version-497253/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0307 22:33:44.512905  203173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/old-k8s-version-497253/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0307 22:33:44.537664  203173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18320-2408/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0307 22:33:44.561044  203173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18320-2408/.minikube/certs/7764.pem --> /usr/share/ca-certificates/7764.pem (1338 bytes)
	I0307 22:33:44.584442  203173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18320-2408/.minikube/files/etc/ssl/certs/77642.pem --> /usr/share/ca-certificates/77642.pem (1708 bytes)
	I0307 22:33:44.611185  203173 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0307 22:33:44.629043  203173 ssh_runner.go:195] Run: openssl version
	I0307 22:33:44.634348  203173 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0307 22:33:44.644257  203173 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0307 22:33:44.648447  203173 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar  7 21:47 /usr/share/ca-certificates/minikubeCA.pem
	I0307 22:33:44.648514  203173 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0307 22:33:44.658108  203173 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0307 22:33:44.667169  203173 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7764.pem && ln -fs /usr/share/ca-certificates/7764.pem /etc/ssl/certs/7764.pem"
	I0307 22:33:44.676185  203173 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7764.pem
	I0307 22:33:44.679451  203173 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar  7 21:53 /usr/share/ca-certificates/7764.pem
	I0307 22:33:44.679511  203173 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7764.pem
	I0307 22:33:44.687470  203173 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7764.pem /etc/ssl/certs/51391683.0"
	I0307 22:33:44.696432  203173 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/77642.pem && ln -fs /usr/share/ca-certificates/77642.pem /etc/ssl/certs/77642.pem"
	I0307 22:33:44.705440  203173 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/77642.pem
	I0307 22:33:44.708800  203173 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar  7 21:53 /usr/share/ca-certificates/77642.pem
	I0307 22:33:44.708879  203173 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/77642.pem
	I0307 22:33:44.715537  203173 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/77642.pem /etc/ssl/certs/3ec20f2e.0"
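	(The hash-named links above — b5213941.0, 51391683.0, 3ec20f2e.0 — follow OpenSSL's c_rehash convention: the link name is the certificate's subject-name hash plus a ".0" suffix, which is how TLS clients locate a CA under /etc/ssl/certs. A hedged bash sketch of building the same link by hand:)
	# Hedged sketch of the c_rehash convention used above:
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"   # h == b5213941 here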
	I0307 22:33:44.723763  203173 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0307 22:33:44.727038  203173 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0307 22:33:44.733567  203173 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0307 22:33:44.740043  203173 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0307 22:33:44.748374  203173 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0307 22:33:44.755282  203173 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0307 22:33:44.761907  203173 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0307 22:33:44.768346  203173 kubeadm.go:391] StartCluster: {Name:old-k8s-version-497253 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-497253 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 22:33:44.768442  203173 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0307 22:33:44.768503  203173 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0307 22:33:44.803721  203173 cri.go:89] found id: "a105c02644c9c901090c4c9792a32b0396cee09395513751eeb1be7fbf4b8477"
	I0307 22:33:44.803785  203173 cri.go:89] found id: "68635283ca2e1d4364e11f919f9201c3e96e21e881f64632d2d769dde9a088e4"
	I0307 22:33:44.803804  203173 cri.go:89] found id: "e9768721745ab392d0ad14d4055c5cf01cc780491a01953b4b45bf77d2d3b062"
	I0307 22:33:44.803825  203173 cri.go:89] found id: "f1df99d1a705a4556e4be13c16d400b99149438c96bfe6da863b8b041983b036"
	I0307 22:33:44.803860  203173 cri.go:89] found id: "cc2db588ff857bfb95417bfc8b125ca64e6caba952ba4acd81ec27522fde3127"
	I0307 22:33:44.803880  203173 cri.go:89] found id: "2ca89421ca012456f503b4b92ac92a69bf6b0042461af4e2e3140e7adb8b27dc"
	I0307 22:33:44.803897  203173 cri.go:89] found id: "bd7a60d8d4f26067debaaf5b2c23fe94dcfed76118b17104051bc8c755b35ce1"
	I0307 22:33:44.803918  203173 cri.go:89] found id: "dabbb0b4fd2574e92c3a53182c494ec08df151f99bd21d27189df149b7f8961c"
	I0307 22:33:44.803938  203173 cri.go:89] found id: ""
	I0307 22:33:44.804015  203173 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0307 22:33:44.817503  203173 cri.go:116] JSON = null
	W0307 22:33:44.817587  203173 kubeadm.go:398] unpause failed: list paused: list returned 0 containers, but ps returned 8
	I0307 22:33:44.817668  203173 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0307 22:33:44.826369  203173 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0307 22:33:44.826392  203173 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0307 22:33:44.826399  203173 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0307 22:33:44.826468  203173 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0307 22:33:44.834818  203173 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0307 22:33:44.835270  203173 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-497253" does not appear in /home/jenkins/minikube-integration/18320-2408/kubeconfig
	I0307 22:33:44.835402  203173 kubeconfig.go:62] /home/jenkins/minikube-integration/18320-2408/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-497253" cluster setting kubeconfig missing "old-k8s-version-497253" context setting]
	I0307 22:33:44.835729  203173 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18320-2408/kubeconfig: {Name:mkc7f9d8cfd4e14e150b8fc8a3019ac099191c36 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 22:33:44.836881  203173 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0307 22:33:44.846022  203173 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.76.2
	I0307 22:33:44.846055  203173 kubeadm.go:591] duration metric: took 19.650661ms to restartPrimaryControlPlane
	I0307 22:33:44.846065  203173 kubeadm.go:393] duration metric: took 77.729844ms to StartCluster
	I0307 22:33:44.846080  203173 settings.go:142] acquiring lock: {Name:mk6b824c86d3c8cffe443e44d2dcdf6ba75674f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 22:33:44.846137  203173 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18320-2408/kubeconfig
	I0307 22:33:44.846751  203173 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18320-2408/kubeconfig: {Name:mkc7f9d8cfd4e14e150b8fc8a3019ac099191c36 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 22:33:44.846946  203173 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0307 22:33:44.850660  203173 out.go:177] * Verifying Kubernetes components...
	I0307 22:33:44.847243  203173 config.go:182] Loaded profile config "old-k8s-version-497253": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0307 22:33:44.847267  203173 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0307 22:33:44.852681  203173 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-497253"
	I0307 22:33:44.852704  203173 addons.go:69] Setting dashboard=true in profile "old-k8s-version-497253"
	I0307 22:33:44.852732  203173 addons.go:234] Setting addon dashboard=true in "old-k8s-version-497253"
	W0307 22:33:44.852745  203173 addons.go:243] addon dashboard should already be in state true
	I0307 22:33:44.852737  203173 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-497253"
	W0307 22:33:44.852771  203173 addons.go:243] addon storage-provisioner should already be in state true
	I0307 22:33:44.852821  203173 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-497253"
	I0307 22:33:44.852827  203173 host.go:66] Checking if "old-k8s-version-497253" exists ...
	I0307 22:33:44.852771  203173 host.go:66] Checking if "old-k8s-version-497253" exists ...
	I0307 22:33:44.853324  203173 cli_runner.go:164] Run: docker container inspect old-k8s-version-497253 --format={{.State.Status}}
	I0307 22:33:44.852847  203173 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-497253"
	W0307 22:33:44.853384  203173 addons.go:243] addon metrics-server should already be in state true
	I0307 22:33:44.853414  203173 host.go:66] Checking if "old-k8s-version-497253" exists ...
	I0307 22:33:44.853798  203173 cli_runner.go:164] Run: docker container inspect old-k8s-version-497253 --format={{.State.Status}}
	I0307 22:33:44.853324  203173 cli_runner.go:164] Run: docker container inspect old-k8s-version-497253 --format={{.State.Status}}
	I0307 22:33:44.852853  203173 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-497253"
	I0307 22:33:44.860552  203173 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-497253"
	I0307 22:33:44.860881  203173 cli_runner.go:164] Run: docker container inspect old-k8s-version-497253 --format={{.State.Status}}
	I0307 22:33:44.852689  203173 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 22:33:44.889679  203173 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0307 22:33:44.893291  203173 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0307 22:33:44.893260  203173 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0307 22:33:44.895760  203173 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0307 22:33:44.895774  203173 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0307 22:33:44.895788  203173 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0307 22:33:44.895829  203173 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-497253
	I0307 22:33:44.895834  203173 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-497253
	I0307 22:33:44.914557  203173 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0307 22:33:44.916729  203173 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0307 22:33:44.918986  203173 addons.go:426] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0307 22:33:44.919014  203173 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0307 22:33:44.919085  203173 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-497253
	I0307 22:33:44.933374  203173 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-497253"
	W0307 22:33:44.933398  203173 addons.go:243] addon default-storageclass should already be in state true
	I0307 22:33:44.933422  203173 host.go:66] Checking if "old-k8s-version-497253" exists ...
	I0307 22:33:44.933832  203173 cli_runner.go:164] Run: docker container inspect old-k8s-version-497253 --format={{.State.Status}}
	I0307 22:33:44.956401  203173 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33067 SSHKeyPath:/home/jenkins/minikube-integration/18320-2408/.minikube/machines/old-k8s-version-497253/id_rsa Username:docker}
	I0307 22:33:44.996472  203173 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33067 SSHKeyPath:/home/jenkins/minikube-integration/18320-2408/.minikube/machines/old-k8s-version-497253/id_rsa Username:docker}
	I0307 22:33:45.006852  203173 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33067 SSHKeyPath:/home/jenkins/minikube-integration/18320-2408/.minikube/machines/old-k8s-version-497253/id_rsa Username:docker}
	I0307 22:33:45.020536  203173 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0307 22:33:45.020572  203173 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0307 22:33:45.020664  203173 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-497253
	I0307 22:33:45.036595  203173 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0307 22:33:45.047926  203173 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33067 SSHKeyPath:/home/jenkins/minikube-integration/18320-2408/.minikube/machines/old-k8s-version-497253/id_rsa Username:docker}
	I0307 22:33:45.062831  203173 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-497253" to be "Ready" ...
	I0307 22:33:45.131095  203173 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0307 22:33:45.131136  203173 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0307 22:33:45.171174  203173 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0307 22:33:45.171251  203173 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0307 22:33:45.188701  203173 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0307 22:33:45.188788  203173 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0307 22:33:45.200848  203173 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0307 22:33:45.209675  203173 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0307 22:33:45.209705  203173 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0307 22:33:45.234125  203173 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0307 22:33:45.234177  203173 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0307 22:33:45.246407  203173 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0307 22:33:45.292446  203173 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0307 22:33:45.359177  203173 addons.go:426] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0307 22:33:45.359203  203173 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0307 22:33:45.529212  203173 addons.go:426] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0307 22:33:45.529235  203173 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	W0307 22:33:45.651171  203173 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 22:33:45.651207  203173 retry.go:31] will retry after 207.834299ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 22:33:45.698512  203173 addons.go:426] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0307 22:33:45.698536  203173 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	W0307 22:33:45.702451  203173 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 22:33:45.702483  203173 retry.go:31] will retry after 239.305637ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0307 22:33:45.702543  203173 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 22:33:45.702554  203173 retry.go:31] will retry after 285.084592ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 22:33:45.727592  203173 addons.go:426] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0307 22:33:45.727618  203173 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0307 22:33:45.746591  203173 addons.go:426] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0307 22:33:45.746617  203173 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0307 22:33:45.765445  203173 addons.go:426] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0307 22:33:45.765472  203173 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0307 22:33:45.784173  203173 addons.go:426] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0307 22:33:45.784204  203173 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0307 22:33:45.801347  203173 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0307 22:33:45.859481  203173 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0307 22:33:45.892037  203173 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 22:33:45.892066  203173 retry.go:31] will retry after 306.888809ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0307 22:33:45.940958  203173 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 22:33:45.940989  203173 retry.go:31] will retry after 279.560793ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 22:33:45.942047  203173 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0307 22:33:45.988311  203173 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0307 22:33:46.018101  203173 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 22:33:46.018133  203173 retry.go:31] will retry after 244.794844ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0307 22:33:46.071189  203173 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 22:33:46.071222  203173 retry.go:31] will retry after 273.270477ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 22:33:46.199525  203173 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0307 22:33:46.220976  203173 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0307 22:33:46.263261  203173 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0307 22:33:46.292179  203173 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 22:33:46.292213  203173 retry.go:31] will retry after 462.749774ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0307 22:33:46.330178  203173 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 22:33:46.330214  203173 retry.go:31] will retry after 659.088346ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 22:33:46.345501  203173 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0307 22:33:46.373773  203173 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 22:33:46.373805  203173 retry.go:31] will retry after 517.989931ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0307 22:33:46.422541  203173 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 22:33:46.422583  203173 retry.go:31] will retry after 473.924423ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 22:33:46.755639  203173 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0307 22:33:46.825405  203173 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 22:33:46.825441  203173 retry.go:31] will retry after 778.977627ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 22:33:46.892734  203173 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0307 22:33:46.897076  203173 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0307 22:33:46.985660  203173 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 22:33:46.985698  203173 retry.go:31] will retry after 450.201187ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 22:33:46.989997  203173 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0307 22:33:46.996936  203173 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 22:33:46.996970  203173 retry.go:31] will retry after 774.656034ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 22:33:47.063895  203173 node_ready.go:53] error getting node "old-k8s-version-497253": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-497253": dial tcp 192.168.76.2:8443: connect: connection refused
	W0307 22:33:47.066480  203173 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 22:33:47.066514  203173 retry.go:31] will retry after 446.708587ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 22:33:47.436978  203173 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0307 22:33:47.503090  203173 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 22:33:47.503120  203173 retry.go:31] will retry after 962.275792ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 22:33:47.514296  203173 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0307 22:33:47.590713  203173 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 22:33:47.590748  203173 retry.go:31] will retry after 1.007436906s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 22:33:47.605008  203173 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0307 22:33:47.689105  203173 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 22:33:47.689140  203173 retry.go:31] will retry after 977.055502ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 22:33:47.772328  203173 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0307 22:33:47.849589  203173 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 22:33:47.849626  203173 retry.go:31] will retry after 903.780241ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 22:33:48.465601  203173 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0307 22:33:48.544093  203173 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 22:33:48.544128  203173 retry.go:31] will retry after 1.405018754s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 22:33:48.598985  203173 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0307 22:33:48.666353  203173 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0307 22:33:48.676135  203173 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 22:33:48.676166  203173 retry.go:31] will retry after 1.040011882s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0307 22:33:48.742582  203173 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 22:33:48.742619  203173 retry.go:31] will retry after 1.848956226s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 22:33:48.753817  203173 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0307 22:33:48.826415  203173 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 22:33:48.826453  203173 retry.go:31] will retry after 1.750209019s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 22:33:49.064137  203173 node_ready.go:53] error getting node "old-k8s-version-497253": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-497253": dial tcp 192.168.76.2:8443: connect: connection refused
	I0307 22:33:49.716389  203173 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0307 22:33:49.783902  203173 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 22:33:49.783932  203173 retry.go:31] will retry after 1.502014512s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 22:33:49.950288  203173 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0307 22:33:50.023699  203173 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 22:33:50.023736  203173 retry.go:31] will retry after 2.484788303s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 22:33:50.577426  203173 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0307 22:33:50.591723  203173 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0307 22:33:50.677623  203173 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 22:33:50.677659  203173 retry.go:31] will retry after 1.750579278s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0307 22:33:50.699830  203173 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 22:33:50.699864  203173 retry.go:31] will retry after 2.494936803s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 22:33:51.286811  203173 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0307 22:33:51.356413  203173 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 22:33:51.356444  203173 retry.go:31] will retry after 3.26534739s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 22:33:51.564065  203173 node_ready.go:53] error getting node "old-k8s-version-497253": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-497253": dial tcp 192.168.76.2:8443: connect: connection refused
	I0307 22:33:52.428471  203173 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0307 22:33:52.508694  203173 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0307 22:33:52.630273  203173 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 22:33:52.630302  203173 retry.go:31] will retry after 2.206892838s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0307 22:33:52.747835  203173 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 22:33:52.747864  203173 retry.go:31] will retry after 6.342468947s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 22:33:53.194938  203173 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0307 22:33:53.293108  203173 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 22:33:53.293144  203173 retry.go:31] will retry after 2.780395788s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0307 22:33:54.064180  203173 node_ready.go:53] error getting node "old-k8s-version-497253": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-497253": dial tcp 192.168.76.2:8443: connect: connection refused
	I0307 22:33:54.622596  203173 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0307 22:33:54.838036  203173 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0307 22:33:56.074467  203173 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0307 22:33:59.091476  203173 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0307 22:34:05.042942  203173 node_ready.go:49] node "old-k8s-version-497253" has status "Ready":"True"
	I0307 22:34:05.042965  203173 node_ready.go:38] duration metric: took 19.980092742s for node "old-k8s-version-497253" to be "Ready" ...
	I0307 22:34:05.042975  203173 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
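
[editor's note] At 22:34:05 node_ready.go reports the node Ready after 19.98s of polling (the earlier "connection refused" errors against 192.168.76.2:8443 were this same poll tolerating the apiserver restart), and the run moves on to the system-critical pod checks. A sketch of the same node Ready-condition poll written against client-go follows; the kubeconfig path and node name are taken from the log, while the poll interval and timeout are assumptions.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Poll until the node's NodeReady condition turns True, mirroring the
	// node_ready.go lines above; Get errors are logged and treated as
	// transient while the apiserver is still coming up.
	err = wait.PollImmediate(2*time.Second, 6*time.Minute, func() (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "old-k8s-version-497253", metav1.GetOptions{})
		if err != nil {
			fmt.Println("error getting node:", err)
			return false, nil // keep polling on error
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
	fmt.Println("node ready:", err == nil)
}
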
	I0307 22:34:05.229670  203173 pod_ready.go:78] waiting up to 6m0s for pod "coredns-74ff55c5b-jjsjg" in "kube-system" namespace to be "Ready" ...
	I0307 22:34:05.586751  203173 pod_ready.go:92] pod "coredns-74ff55c5b-jjsjg" in "kube-system" namespace has status "Ready":"True"
	I0307 22:34:05.586827  203173 pod_ready.go:81] duration metric: took 357.077337ms for pod "coredns-74ff55c5b-jjsjg" in "kube-system" namespace to be "Ready" ...
	I0307 22:34:05.586856  203173 pod_ready.go:78] waiting up to 6m0s for pod "etcd-old-k8s-version-497253" in "kube-system" namespace to be "Ready" ...
	I0307 22:34:05.737035  203173 pod_ready.go:92] pod "etcd-old-k8s-version-497253" in "kube-system" namespace has status "Ready":"True"
	I0307 22:34:05.737105  203173 pod_ready.go:81] duration metric: took 150.229235ms for pod "etcd-old-k8s-version-497253" in "kube-system" namespace to be "Ready" ...
	I0307 22:34:05.737140  203173 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-497253" in "kube-system" namespace to be "Ready" ...
	I0307 22:34:07.772384  203173 pod_ready.go:102] pod "kube-apiserver-old-k8s-version-497253" in "kube-system" namespace has status "Ready":"False"
	I0307 22:34:08.830746  203173 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (14.208115868s)
	I0307 22:34:08.919288  203173 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (14.0812099s)
	I0307 22:34:08.919327  203173 addons.go:470] Verifying addon metrics-server=true in "old-k8s-version-497253"
	I0307 22:34:08.919425  203173 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (12.844926758s)
	I0307 22:34:08.922187  203173 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-497253 addons enable metrics-server
	
	I0307 22:34:08.919679  203173 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (9.828175238s)
	I0307 22:34:08.939675  203173 out.go:177] * Enabled addons: storage-provisioner, metrics-server, dashboard, default-storageclass
	I0307 22:34:08.941812  203173 addons.go:505] duration metric: took 24.094540566s for enable addons: enabled=[storage-provisioner metrics-server dashboard default-storageclass]
	I0307 22:34:10.243149  203173 pod_ready.go:102] pod "kube-apiserver-old-k8s-version-497253" in "kube-system" namespace has status "Ready":"False"
	I0307 22:34:12.243533  203173 pod_ready.go:102] pod "kube-apiserver-old-k8s-version-497253" in "kube-system" namespace has status "Ready":"False"
	I0307 22:34:14.243184  203173 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-497253" in "kube-system" namespace has status "Ready":"True"
	I0307 22:34:14.243209  203173 pod_ready.go:81] duration metric: took 8.5060487s for pod "kube-apiserver-old-k8s-version-497253" in "kube-system" namespace to be "Ready" ...
	I0307 22:34:14.243220  203173 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-497253" in "kube-system" namespace to be "Ready" ...
	I0307 22:34:16.250723  203173 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-497253" in "kube-system" namespace has status "Ready":"False"
	I0307 22:34:18.749639  203173 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-497253" in "kube-system" namespace has status "Ready":"False"
	I0307 22:34:20.749691  203173 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-497253" in "kube-system" namespace has status "Ready":"False"
	I0307 22:34:22.750761  203173 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-497253" in "kube-system" namespace has status "Ready":"False"
	I0307 22:34:24.796046  203173 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-497253" in "kube-system" namespace has status "Ready":"False"
	I0307 22:34:27.250622  203173 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-497253" in "kube-system" namespace has status "Ready":"False"
	I0307 22:34:29.251886  203173 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-497253" in "kube-system" namespace has status "Ready":"False"
	I0307 22:34:31.253634  203173 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-497253" in "kube-system" namespace has status "Ready":"False"
	I0307 22:34:33.750324  203173 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-497253" in "kube-system" namespace has status "Ready":"False"
	I0307 22:34:36.249866  203173 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-497253" in "kube-system" namespace has status "Ready":"False"
	I0307 22:34:38.249956  203173 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-497253" in "kube-system" namespace has status "Ready":"False"
	I0307 22:34:40.250636  203173 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-497253" in "kube-system" namespace has status "Ready":"False"
	I0307 22:34:42.250914  203173 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-497253" in "kube-system" namespace has status "Ready":"False"
	I0307 22:34:44.750128  203173 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-497253" in "kube-system" namespace has status "Ready":"False"
	I0307 22:34:47.249265  203173 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-497253" in "kube-system" namespace has status "Ready":"False"
	I0307 22:34:49.250180  203173 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-497253" in "kube-system" namespace has status "Ready":"False"
	I0307 22:34:51.750157  203173 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-497253" in "kube-system" namespace has status "Ready":"False"
	I0307 22:34:53.753267  203173 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-497253" in "kube-system" namespace has status "Ready":"False"
	I0307 22:34:56.249875  203173 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-497253" in "kube-system" namespace has status "Ready":"False"
	I0307 22:34:58.749926  203173 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-497253" in "kube-system" namespace has status "Ready":"False"
	I0307 22:35:01.250674  203173 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-497253" in "kube-system" namespace has status "Ready":"False"
	I0307 22:35:03.749214  203173 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-497253" in "kube-system" namespace has status "Ready":"False"
	I0307 22:35:05.750941  203173 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-497253" in "kube-system" namespace has status "Ready":"False"
	I0307 22:35:08.249026  203173 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-497253" in "kube-system" namespace has status "Ready":"False"
	I0307 22:35:10.251021  203173 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-497253" in "kube-system" namespace has status "Ready":"False"
	I0307 22:35:12.769053  203173 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-497253" in "kube-system" namespace has status "Ready":"False"
	I0307 22:35:15.253885  203173 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-497253" in "kube-system" namespace has status "Ready":"False"
	I0307 22:35:17.751535  203173 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-497253" in "kube-system" namespace has status "Ready":"False"
	I0307 22:35:20.250616  203173 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-497253" in "kube-system" namespace has status "Ready":"False"
	I0307 22:35:22.757325  203173 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-497253" in "kube-system" namespace has status "Ready":"False"
	I0307 22:35:25.278867  203173 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-497253" in "kube-system" namespace has status "Ready":"True"
	I0307 22:35:25.278895  203173 pod_ready.go:81] duration metric: took 1m11.035666367s for pod "kube-controller-manager-old-k8s-version-497253" in "kube-system" namespace to be "Ready" ...
	I0307 22:35:25.278908  203173 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8s7l5" in "kube-system" namespace to be "Ready" ...
	I0307 22:35:25.284398  203173 pod_ready.go:92] pod "kube-proxy-8s7l5" in "kube-system" namespace has status "Ready":"True"
	I0307 22:35:25.284421  203173 pod_ready.go:81] duration metric: took 5.505752ms for pod "kube-proxy-8s7l5" in "kube-system" namespace to be "Ready" ...
	I0307 22:35:25.284432  203173 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-497253" in "kube-system" namespace to be "Ready" ...
	I0307 22:35:25.289813  203173 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-497253" in "kube-system" namespace has status "Ready":"True"
	I0307 22:35:25.289838  203173 pod_ready.go:81] duration metric: took 5.397953ms for pod "kube-scheduler-old-k8s-version-497253" in "kube-system" namespace to be "Ready" ...
	I0307 22:35:25.289850  203173 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-9975d5f86-qmg9k" in "kube-system" namespace to be "Ready" ...
	I0307 22:35:27.296486  203173 pod_ready.go:102] pod "metrics-server-9975d5f86-qmg9k" in "kube-system" namespace has status "Ready":"False"
	I0307 22:35:29.297074  203173 pod_ready.go:102] pod "metrics-server-9975d5f86-qmg9k" in "kube-system" namespace has status "Ready":"False"
	I0307 22:35:31.796091  203173 pod_ready.go:102] pod "metrics-server-9975d5f86-qmg9k" in "kube-system" namespace has status "Ready":"False"
	I0307 22:35:33.797140  203173 pod_ready.go:102] pod "metrics-server-9975d5f86-qmg9k" in "kube-system" namespace has status "Ready":"False"
	I0307 22:35:35.801061  203173 pod_ready.go:102] pod "metrics-server-9975d5f86-qmg9k" in "kube-system" namespace has status "Ready":"False"
	I0307 22:35:38.297272  203173 pod_ready.go:102] pod "metrics-server-9975d5f86-qmg9k" in "kube-system" namespace has status "Ready":"False"
	[... 97 further pod_ready.go:102 polls, one every ~2.5s from 22:35:40 through 22:39:20, each reporting: pod "metrics-server-9975d5f86-qmg9k" in "kube-system" namespace has status "Ready":"False" ...]
	I0307 22:39:23.296761  203173 pod_ready.go:102] pod "metrics-server-9975d5f86-qmg9k" in "kube-system" namespace has status "Ready":"False"
	I0307 22:39:25.296213  203173 pod_ready.go:81] duration metric: took 4m0.00634836s for pod "metrics-server-9975d5f86-qmg9k" in "kube-system" namespace to be "Ready" ...
	E0307 22:39:25.296241  203173 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0307 22:39:25.296251  203173 pod_ready.go:38] duration metric: took 5m20.253260881s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
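
The four-minute stretch above is a bounded readiness poll: the harness re-checks the pod's Ready condition on a short interval until it turns True or the deadline expires, at which point WaitExtra surfaces "context deadline exceeded". A minimal client-go sketch of that pattern follows; it is an illustration only, not minikube's pod_ready.go, and the kubeconfig wiring, 2s interval, and 4m timeout are assumptions.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls the pod's Ready condition until it is True or the
// timeout elapses, mirroring the "Ready":"False" loop in the log above.
func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat transient API errors as "not ready yet"
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	err = waitPodReady(context.Background(), cs, "kube-system",
		"metrics-server-9975d5f86-qmg9k", 4*time.Minute)
	fmt.Println("ready:", err == nil, "err:", err)
}

When the deadline passes, wait.PollUntilContextTimeout returns context.DeadlineExceeded, the same error the WaitExtra line above reports.
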
	I0307 22:39:25.296265  203173 api_server.go:52] waiting for apiserver process to appear ...
	I0307 22:39:25.296329  203173 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0307 22:39:25.296395  203173 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0307 22:39:25.336345  203173 cri.go:89] found id: "8b2ed0287cc5bbf594cda29d926ae828c7d6d12ad490285c6e855c8824388d29"
	I0307 22:39:25.336366  203173 cri.go:89] found id: "cc2db588ff857bfb95417bfc8b125ca64e6caba952ba4acd81ec27522fde3127"
	I0307 22:39:25.336371  203173 cri.go:89] found id: ""
	I0307 22:39:25.336378  203173 logs.go:276] 2 containers: [8b2ed0287cc5bbf594cda29d926ae828c7d6d12ad490285c6e855c8824388d29 cc2db588ff857bfb95417bfc8b125ca64e6caba952ba4acd81ec27522fde3127]
	I0307 22:39:25.336438  203173 ssh_runner.go:195] Run: which crictl
	I0307 22:39:25.340420  203173 ssh_runner.go:195] Run: which crictl
	I0307 22:39:25.343703  203173 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0307 22:39:25.343781  203173 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0307 22:39:25.383563  203173 cri.go:89] found id: "14afb08fcf0333a1d360d000f1fa3b158ae712b1b2a3e6c52b164c05245f920d"
	I0307 22:39:25.383585  203173 cri.go:89] found id: "dabbb0b4fd2574e92c3a53182c494ec08df151f99bd21d27189df149b7f8961c"
	I0307 22:39:25.383596  203173 cri.go:89] found id: ""
	I0307 22:39:25.383604  203173 logs.go:276] 2 containers: [14afb08fcf0333a1d360d000f1fa3b158ae712b1b2a3e6c52b164c05245f920d dabbb0b4fd2574e92c3a53182c494ec08df151f99bd21d27189df149b7f8961c]
	I0307 22:39:25.383675  203173 ssh_runner.go:195] Run: which crictl
	I0307 22:39:25.387470  203173 ssh_runner.go:195] Run: which crictl
	I0307 22:39:25.390779  203173 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0307 22:39:25.390841  203173 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0307 22:39:25.437462  203173 cri.go:89] found id: "413452dbe13bfb277303d92484791aa53ae9a198a089364135d2347bf8475623"
	I0307 22:39:25.437481  203173 cri.go:89] found id: "a105c02644c9c901090c4c9792a32b0396cee09395513751eeb1be7fbf4b8477"
	I0307 22:39:25.437486  203173 cri.go:89] found id: ""
	I0307 22:39:25.437493  203173 logs.go:276] 2 containers: [413452dbe13bfb277303d92484791aa53ae9a198a089364135d2347bf8475623 a105c02644c9c901090c4c9792a32b0396cee09395513751eeb1be7fbf4b8477]
	I0307 22:39:25.437547  203173 ssh_runner.go:195] Run: which crictl
	I0307 22:39:25.441300  203173 ssh_runner.go:195] Run: which crictl
	I0307 22:39:25.444236  203173 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0307 22:39:25.444346  203173 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0307 22:39:25.486461  203173 cri.go:89] found id: "157ece61e2d99f79b42530111727fc9798a3b91433c5d3a5cd83cefec75a6a67"
	I0307 22:39:25.486519  203173 cri.go:89] found id: "2ca89421ca012456f503b4b92ac92a69bf6b0042461af4e2e3140e7adb8b27dc"
	I0307 22:39:25.486538  203173 cri.go:89] found id: ""
	I0307 22:39:25.486552  203173 logs.go:276] 2 containers: [157ece61e2d99f79b42530111727fc9798a3b91433c5d3a5cd83cefec75a6a67 2ca89421ca012456f503b4b92ac92a69bf6b0042461af4e2e3140e7adb8b27dc]
	I0307 22:39:25.486610  203173 ssh_runner.go:195] Run: which crictl
	I0307 22:39:25.490758  203173 ssh_runner.go:195] Run: which crictl
	I0307 22:39:25.494081  203173 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0307 22:39:25.494191  203173 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0307 22:39:25.533954  203173 cri.go:89] found id: "7998095a37dd2896f96215e8ab5d29ae495d3dc1a00b119506f8ace55156b07a"
	I0307 22:39:25.534016  203173 cri.go:89] found id: "f1df99d1a705a4556e4be13c16d400b99149438c96bfe6da863b8b041983b036"
	I0307 22:39:25.534046  203173 cri.go:89] found id: ""
	I0307 22:39:25.534060  203173 logs.go:276] 2 containers: [7998095a37dd2896f96215e8ab5d29ae495d3dc1a00b119506f8ace55156b07a f1df99d1a705a4556e4be13c16d400b99149438c96bfe6da863b8b041983b036]
	I0307 22:39:25.534121  203173 ssh_runner.go:195] Run: which crictl
	I0307 22:39:25.537826  203173 ssh_runner.go:195] Run: which crictl
	I0307 22:39:25.542894  203173 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0307 22:39:25.543014  203173 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0307 22:39:25.585812  203173 cri.go:89] found id: "9ba0a9f80bc7b1866c7509e8df81dcc0d3e74b8702a84d7676660a9395cf59fc"
	I0307 22:39:25.585836  203173 cri.go:89] found id: "bd7a60d8d4f26067debaaf5b2c23fe94dcfed76118b17104051bc8c755b35ce1"
	I0307 22:39:25.585842  203173 cri.go:89] found id: ""
	I0307 22:39:25.585855  203173 logs.go:276] 2 containers: [9ba0a9f80bc7b1866c7509e8df81dcc0d3e74b8702a84d7676660a9395cf59fc bd7a60d8d4f26067debaaf5b2c23fe94dcfed76118b17104051bc8c755b35ce1]
	I0307 22:39:25.585922  203173 ssh_runner.go:195] Run: which crictl
	I0307 22:39:25.589442  203173 ssh_runner.go:195] Run: which crictl
	I0307 22:39:25.592832  203173 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0307 22:39:25.592923  203173 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0307 22:39:25.636509  203173 cri.go:89] found id: "2543aa094e87373b5f846e8bdc2bbadbdda2c55f6d326c6db6b9216375dde5e6"
	I0307 22:39:25.636533  203173 cri.go:89] found id: "e9768721745ab392d0ad14d4055c5cf01cc780491a01953b4b45bf77d2d3b062"
	I0307 22:39:25.636538  203173 cri.go:89] found id: ""
	I0307 22:39:25.636545  203173 logs.go:276] 2 containers: [2543aa094e87373b5f846e8bdc2bbadbdda2c55f6d326c6db6b9216375dde5e6 e9768721745ab392d0ad14d4055c5cf01cc780491a01953b4b45bf77d2d3b062]
	I0307 22:39:25.636619  203173 ssh_runner.go:195] Run: which crictl
	I0307 22:39:25.640672  203173 ssh_runner.go:195] Run: which crictl
	I0307 22:39:25.644632  203173 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0307 22:39:25.644725  203173 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0307 22:39:25.685799  203173 cri.go:89] found id: "778e16c451c2e6a7fecd1d1b5d37778e92f50fd757af7d8812222cfa025fdc86"
	I0307 22:39:25.685822  203173 cri.go:89] found id: ""
	I0307 22:39:25.685830  203173 logs.go:276] 1 containers: [778e16c451c2e6a7fecd1d1b5d37778e92f50fd757af7d8812222cfa025fdc86]
	I0307 22:39:25.685911  203173 ssh_runner.go:195] Run: which crictl
	I0307 22:39:25.689934  203173 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0307 22:39:25.690008  203173 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0307 22:39:25.727726  203173 cri.go:89] found id: "426c41631cb95479682cbd7b19483427f710b2f7932b9e6d2bc0367710c2ca1d"
	I0307 22:39:25.727753  203173 cri.go:89] found id: "f98ff695720f5ebea06fa188196a8a6393c29dfd0211d4a0ec784c54cbb9f906"
	I0307 22:39:25.727758  203173 cri.go:89] found id: ""
	I0307 22:39:25.727766  203173 logs.go:276] 2 containers: [426c41631cb95479682cbd7b19483427f710b2f7932b9e6d2bc0367710c2ca1d f98ff695720f5ebea06fa188196a8a6393c29dfd0211d4a0ec784c54cbb9f906]
	I0307 22:39:25.727867  203173 ssh_runner.go:195] Run: which crictl
	I0307 22:39:25.731423  203173 ssh_runner.go:195] Run: which crictl
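
Each probe in the block above pairs "sudo crictl ps -a --quiet --name=<component>" (which prints one container ID per line, including exited containers, hence the two IDs most components show after the restart) with a "which crictl" lookup. Below is a self-contained sketch of that enumeration step; it assumes crictl on the local PATH and passwordless sudo, whereas minikube runs the command on the node over SSH via ssh_runner.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs runs `sudo crictl ps -a --quiet --name=<name>` and returns the
// non-empty output lines, i.e. one ID per matching container in any state.
func containerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(string(out), "\n") {
		if l := strings.TrimSpace(line); l != "" {
			ids = append(ids, l)
		}
	}
	return ids, nil
}

func main() {
	for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := containerIDs(name)
		fmt.Printf("%s: %d containers %v (err: %v)\n", name, len(ids), ids, err)
	}
}
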
	I0307 22:39:25.735053  203173 logs.go:123] Gathering logs for containerd ...
	I0307 22:39:25.735120  203173 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0307 22:39:25.796794  203173 logs.go:123] Gathering logs for etcd [14afb08fcf0333a1d360d000f1fa3b158ae712b1b2a3e6c52b164c05245f920d] ...
	I0307 22:39:25.796828  203173 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 14afb08fcf0333a1d360d000f1fa3b158ae712b1b2a3e6c52b164c05245f920d"
	I0307 22:39:25.841590  203173 logs.go:123] Gathering logs for kube-scheduler [157ece61e2d99f79b42530111727fc9798a3b91433c5d3a5cd83cefec75a6a67] ...
	I0307 22:39:25.841620  203173 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 157ece61e2d99f79b42530111727fc9798a3b91433c5d3a5cd83cefec75a6a67"
	I0307 22:39:25.886268  203173 logs.go:123] Gathering logs for kube-proxy [7998095a37dd2896f96215e8ab5d29ae495d3dc1a00b119506f8ace55156b07a] ...
	I0307 22:39:25.886343  203173 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7998095a37dd2896f96215e8ab5d29ae495d3dc1a00b119506f8ace55156b07a"
	I0307 22:39:25.924898  203173 logs.go:123] Gathering logs for kube-scheduler [2ca89421ca012456f503b4b92ac92a69bf6b0042461af4e2e3140e7adb8b27dc] ...
	I0307 22:39:25.924972  203173 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ca89421ca012456f503b4b92ac92a69bf6b0042461af4e2e3140e7adb8b27dc"
	I0307 22:39:25.970416  203173 logs.go:123] Gathering logs for kindnet [2543aa094e87373b5f846e8bdc2bbadbdda2c55f6d326c6db6b9216375dde5e6] ...
	I0307 22:39:25.970447  203173 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2543aa094e87373b5f846e8bdc2bbadbdda2c55f6d326c6db6b9216375dde5e6"
	I0307 22:39:26.015966  203173 logs.go:123] Gathering logs for kindnet [e9768721745ab392d0ad14d4055c5cf01cc780491a01953b4b45bf77d2d3b062] ...
	I0307 22:39:26.015998  203173 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e9768721745ab392d0ad14d4055c5cf01cc780491a01953b4b45bf77d2d3b062"
	I0307 22:39:26.070164  203173 logs.go:123] Gathering logs for kubernetes-dashboard [778e16c451c2e6a7fecd1d1b5d37778e92f50fd757af7d8812222cfa025fdc86] ...
	I0307 22:39:26.070190  203173 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 778e16c451c2e6a7fecd1d1b5d37778e92f50fd757af7d8812222cfa025fdc86"
	I0307 22:39:26.111136  203173 logs.go:123] Gathering logs for container status ...
	I0307 22:39:26.111163  203173 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 22:39:26.163682  203173 logs.go:123] Gathering logs for dmesg ...
	I0307 22:39:26.163711  203173 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 22:39:26.181702  203173 logs.go:123] Gathering logs for kube-apiserver [cc2db588ff857bfb95417bfc8b125ca64e6caba952ba4acd81ec27522fde3127] ...
	I0307 22:39:26.181732  203173 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cc2db588ff857bfb95417bfc8b125ca64e6caba952ba4acd81ec27522fde3127"
	I0307 22:39:26.246451  203173 logs.go:123] Gathering logs for coredns [413452dbe13bfb277303d92484791aa53ae9a198a089364135d2347bf8475623] ...
	I0307 22:39:26.246483  203173 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 413452dbe13bfb277303d92484791aa53ae9a198a089364135d2347bf8475623"
	I0307 22:39:26.295840  203173 logs.go:123] Gathering logs for kube-proxy [f1df99d1a705a4556e4be13c16d400b99149438c96bfe6da863b8b041983b036] ...
	I0307 22:39:26.295868  203173 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f1df99d1a705a4556e4be13c16d400b99149438c96bfe6da863b8b041983b036"
	I0307 22:39:26.341619  203173 logs.go:123] Gathering logs for kube-controller-manager [bd7a60d8d4f26067debaaf5b2c23fe94dcfed76118b17104051bc8c755b35ce1] ...
	I0307 22:39:26.341646  203173 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd7a60d8d4f26067debaaf5b2c23fe94dcfed76118b17104051bc8c755b35ce1"
	I0307 22:39:26.408963  203173 logs.go:123] Gathering logs for kubelet ...
	I0307 22:39:26.408996  203173 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0307 22:39:26.472507  203173 logs.go:138] Found kubelet problem: Mar 07 22:34:05 old-k8s-version-497253 kubelet[660]: E0307 22:34:05.261951     660 reflector.go:138] object-"kube-system"/"kindnet-token-ghqhd": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-ghqhd" is forbidden: User "system:node:old-k8s-version-497253" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-497253' and this object
	W0307 22:39:26.479496  203173 logs.go:138] Found kubelet problem: Mar 07 22:34:07 old-k8s-version-497253 kubelet[660]: E0307 22:34:07.162816     660 pod_workers.go:191] Error syncing pod 1cd023e3-2966-4092-bcbd-4c5b6ca28aa3 ("metrics-server-9975d5f86-qmg9k_kube-system(1cd023e3-2966-4092-bcbd-4c5b6ca28aa3)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0307 22:39:26.479691  203173 logs.go:138] Found kubelet problem: Mar 07 22:34:07 old-k8s-version-497253 kubelet[660]: E0307 22:34:07.277655     660 pod_workers.go:191] Error syncing pod 1cd023e3-2966-4092-bcbd-4c5b6ca28aa3 ("metrics-server-9975d5f86-qmg9k_kube-system(1cd023e3-2966-4092-bcbd-4c5b6ca28aa3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 22:39:26.482575  203173 logs.go:138] Found kubelet problem: Mar 07 22:34:22 old-k8s-version-497253 kubelet[660]: E0307 22:34:22.936411     660 pod_workers.go:191] Error syncing pod 1cd023e3-2966-4092-bcbd-4c5b6ca28aa3 ("metrics-server-9975d5f86-qmg9k_kube-system(1cd023e3-2966-4092-bcbd-4c5b6ca28aa3)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0307 22:39:26.484741  203173 logs.go:138] Found kubelet problem: Mar 07 22:34:36 old-k8s-version-497253 kubelet[660]: E0307 22:34:36.920792     660 pod_workers.go:191] Error syncing pod 1cd023e3-2966-4092-bcbd-4c5b6ca28aa3 ("metrics-server-9975d5f86-qmg9k_kube-system(1cd023e3-2966-4092-bcbd-4c5b6ca28aa3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 22:39:26.485205  203173 logs.go:138] Found kubelet problem: Mar 07 22:34:37 old-k8s-version-497253 kubelet[660]: E0307 22:34:37.462923     660 pod_workers.go:191] Error syncing pod 14312bc8-5525-434e-b6f7-56306d708544 ("dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"
	W0307 22:39:26.485558  203173 logs.go:138] Found kubelet problem: Mar 07 22:34:38 old-k8s-version-497253 kubelet[660]: E0307 22:34:38.465991     660 pod_workers.go:191] Error syncing pod 14312bc8-5525-434e-b6f7-56306d708544 ("dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"
	W0307 22:39:26.486058  203173 logs.go:138] Found kubelet problem: Mar 07 22:34:39 old-k8s-version-497253 kubelet[660]: E0307 22:34:39.469998     660 pod_workers.go:191] Error syncing pod f477a573-d9fd-4235-84a6-32e52bea48e1 ("storage-provisioner_kube-system(f477a573-d9fd-4235-84a6-32e52bea48e1)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(f477a573-d9fd-4235-84a6-32e52bea48e1)"
	W0307 22:39:26.486394  203173 logs.go:138] Found kubelet problem: Mar 07 22:34:41 old-k8s-version-497253 kubelet[660]: E0307 22:34:41.449013     660 pod_workers.go:191] Error syncing pod 14312bc8-5525-434e-b6f7-56306d708544 ("dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"
	W0307 22:39:26.489348  203173 logs.go:138] Found kubelet problem: Mar 07 22:34:51 old-k8s-version-497253 kubelet[660]: E0307 22:34:51.926704     660 pod_workers.go:191] Error syncing pod 1cd023e3-2966-4092-bcbd-4c5b6ca28aa3 ("metrics-server-9975d5f86-qmg9k_kube-system(1cd023e3-2966-4092-bcbd-4c5b6ca28aa3)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0307 22:39:26.489941  203173 logs.go:138] Found kubelet problem: Mar 07 22:34:54 old-k8s-version-497253 kubelet[660]: E0307 22:34:54.505288     660 pod_workers.go:191] Error syncing pod 14312bc8-5525-434e-b6f7-56306d708544 ("dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"
	W0307 22:39:26.490272  203173 logs.go:138] Found kubelet problem: Mar 07 22:35:01 old-k8s-version-497253 kubelet[660]: E0307 22:35:01.448994     660 pod_workers.go:191] Error syncing pod 14312bc8-5525-434e-b6f7-56306d708544 ("dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"
	W0307 22:39:26.490457  203173 logs.go:138] Found kubelet problem: Mar 07 22:35:02 old-k8s-version-497253 kubelet[660]: E0307 22:35:02.915003     660 pod_workers.go:191] Error syncing pod 1cd023e3-2966-4092-bcbd-4c5b6ca28aa3 ("metrics-server-9975d5f86-qmg9k_kube-system(1cd023e3-2966-4092-bcbd-4c5b6ca28aa3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 22:39:26.490785  203173 logs.go:138] Found kubelet problem: Mar 07 22:35:12 old-k8s-version-497253 kubelet[660]: E0307 22:35:12.922852     660 pod_workers.go:191] Error syncing pod 14312bc8-5525-434e-b6f7-56306d708544 ("dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"
	W0307 22:39:26.490970  203173 logs.go:138] Found kubelet problem: Mar 07 22:35:14 old-k8s-version-497253 kubelet[660]: E0307 22:35:14.914572     660 pod_workers.go:191] Error syncing pod 1cd023e3-2966-4092-bcbd-4c5b6ca28aa3 ("metrics-server-9975d5f86-qmg9k_kube-system(1cd023e3-2966-4092-bcbd-4c5b6ca28aa3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 22:39:26.491560  203173 logs.go:138] Found kubelet problem: Mar 07 22:35:25 old-k8s-version-497253 kubelet[660]: E0307 22:35:25.578672     660 pod_workers.go:191] Error syncing pod 14312bc8-5525-434e-b6f7-56306d708544 ("dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"
	W0307 22:39:26.491748  203173 logs.go:138] Found kubelet problem: Mar 07 22:35:26 old-k8s-version-497253 kubelet[660]: E0307 22:35:26.918579     660 pod_workers.go:191] Error syncing pod 1cd023e3-2966-4092-bcbd-4c5b6ca28aa3 ("metrics-server-9975d5f86-qmg9k_kube-system(1cd023e3-2966-4092-bcbd-4c5b6ca28aa3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 22:39:26.492079  203173 logs.go:138] Found kubelet problem: Mar 07 22:35:31 old-k8s-version-497253 kubelet[660]: E0307 22:35:31.448901     660 pod_workers.go:191] Error syncing pod 14312bc8-5525-434e-b6f7-56306d708544 ("dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"
	W0307 22:39:26.494548  203173 logs.go:138] Found kubelet problem: Mar 07 22:35:39 old-k8s-version-497253 kubelet[660]: E0307 22:35:39.938727     660 pod_workers.go:191] Error syncing pod 1cd023e3-2966-4092-bcbd-4c5b6ca28aa3 ("metrics-server-9975d5f86-qmg9k_kube-system(1cd023e3-2966-4092-bcbd-4c5b6ca28aa3)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0307 22:39:26.494876  203173 logs.go:138] Found kubelet problem: Mar 07 22:35:46 old-k8s-version-497253 kubelet[660]: E0307 22:35:46.914538     660 pod_workers.go:191] Error syncing pod 14312bc8-5525-434e-b6f7-56306d708544 ("dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"
	W0307 22:39:26.495064  203173 logs.go:138] Found kubelet problem: Mar 07 22:35:53 old-k8s-version-497253 kubelet[660]: E0307 22:35:53.914358     660 pod_workers.go:191] Error syncing pod 1cd023e3-2966-4092-bcbd-4c5b6ca28aa3 ("metrics-server-9975d5f86-qmg9k_kube-system(1cd023e3-2966-4092-bcbd-4c5b6ca28aa3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 22:39:26.495392  203173 logs.go:138] Found kubelet problem: Mar 07 22:36:00 old-k8s-version-497253 kubelet[660]: E0307 22:36:00.917774     660 pod_workers.go:191] Error syncing pod 14312bc8-5525-434e-b6f7-56306d708544 ("dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"
	W0307 22:39:26.495577  203173 logs.go:138] Found kubelet problem: Mar 07 22:36:05 old-k8s-version-497253 kubelet[660]: E0307 22:36:05.914412     660 pod_workers.go:191] Error syncing pod 1cd023e3-2966-4092-bcbd-4c5b6ca28aa3 ("metrics-server-9975d5f86-qmg9k_kube-system(1cd023e3-2966-4092-bcbd-4c5b6ca28aa3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 22:39:26.496168  203173 logs.go:138] Found kubelet problem: Mar 07 22:36:15 old-k8s-version-497253 kubelet[660]: E0307 22:36:15.682987     660 pod_workers.go:191] Error syncing pod 14312bc8-5525-434e-b6f7-56306d708544 ("dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"
	W0307 22:39:26.496383  203173 logs.go:138] Found kubelet problem: Mar 07 22:36:18 old-k8s-version-497253 kubelet[660]: E0307 22:36:18.914367     660 pod_workers.go:191] Error syncing pod 1cd023e3-2966-4092-bcbd-4c5b6ca28aa3 ("metrics-server-9975d5f86-qmg9k_kube-system(1cd023e3-2966-4092-bcbd-4c5b6ca28aa3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 22:39:26.496715  203173 logs.go:138] Found kubelet problem: Mar 07 22:36:21 old-k8s-version-497253 kubelet[660]: E0307 22:36:21.449525     660 pod_workers.go:191] Error syncing pod 14312bc8-5525-434e-b6f7-56306d708544 ("dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"
	W0307 22:39:26.496902  203173 logs.go:138] Found kubelet problem: Mar 07 22:36:31 old-k8s-version-497253 kubelet[660]: E0307 22:36:31.914470     660 pod_workers.go:191] Error syncing pod 1cd023e3-2966-4092-bcbd-4c5b6ca28aa3 ("metrics-server-9975d5f86-qmg9k_kube-system(1cd023e3-2966-4092-bcbd-4c5b6ca28aa3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 22:39:26.497228  203173 logs.go:138] Found kubelet problem: Mar 07 22:36:33 old-k8s-version-497253 kubelet[660]: E0307 22:36:33.914221     660 pod_workers.go:191] Error syncing pod 14312bc8-5525-434e-b6f7-56306d708544 ("dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"
	W0307 22:39:26.497446  203173 logs.go:138] Found kubelet problem: Mar 07 22:36:42 old-k8s-version-497253 kubelet[660]: E0307 22:36:42.914696     660 pod_workers.go:191] Error syncing pod 1cd023e3-2966-4092-bcbd-4c5b6ca28aa3 ("metrics-server-9975d5f86-qmg9k_kube-system(1cd023e3-2966-4092-bcbd-4c5b6ca28aa3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 22:39:26.497795  203173 logs.go:138] Found kubelet problem: Mar 07 22:36:45 old-k8s-version-497253 kubelet[660]: E0307 22:36:45.914100     660 pod_workers.go:191] Error syncing pod 14312bc8-5525-434e-b6f7-56306d708544 ("dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"
	W0307 22:39:26.497981  203173 logs.go:138] Found kubelet problem: Mar 07 22:36:57 old-k8s-version-497253 kubelet[660]: E0307 22:36:57.914372     660 pod_workers.go:191] Error syncing pod 1cd023e3-2966-4092-bcbd-4c5b6ca28aa3 ("metrics-server-9975d5f86-qmg9k_kube-system(1cd023e3-2966-4092-bcbd-4c5b6ca28aa3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 22:39:26.498309  203173 logs.go:138] Found kubelet problem: Mar 07 22:36:58 old-k8s-version-497253 kubelet[660]: E0307 22:36:58.914093     660 pod_workers.go:191] Error syncing pod 14312bc8-5525-434e-b6f7-56306d708544 ("dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"
	W0307 22:39:26.498659  203173 logs.go:138] Found kubelet problem: Mar 07 22:37:10 old-k8s-version-497253 kubelet[660]: E0307 22:37:10.914874     660 pod_workers.go:191] Error syncing pod 14312bc8-5525-434e-b6f7-56306d708544 ("dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"
	W0307 22:39:26.501139  203173 logs.go:138] Found kubelet problem: Mar 07 22:37:10 old-k8s-version-497253 kubelet[660]: E0307 22:37:10.928532     660 pod_workers.go:191] Error syncing pod 1cd023e3-2966-4092-bcbd-4c5b6ca28aa3 ("metrics-server-9975d5f86-qmg9k_kube-system(1cd023e3-2966-4092-bcbd-4c5b6ca28aa3)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0307 22:39:26.501328  203173 logs.go:138] Found kubelet problem: Mar 07 22:37:21 old-k8s-version-497253 kubelet[660]: E0307 22:37:21.914234     660 pod_workers.go:191] Error syncing pod 1cd023e3-2966-4092-bcbd-4c5b6ca28aa3 ("metrics-server-9975d5f86-qmg9k_kube-system(1cd023e3-2966-4092-bcbd-4c5b6ca28aa3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 22:39:26.501658  203173 logs.go:138] Found kubelet problem: Mar 07 22:37:23 old-k8s-version-497253 kubelet[660]: E0307 22:37:23.914014     660 pod_workers.go:191] Error syncing pod 14312bc8-5525-434e-b6f7-56306d708544 ("dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"
	W0307 22:39:26.501850  203173 logs.go:138] Found kubelet problem: Mar 07 22:37:33 old-k8s-version-497253 kubelet[660]: E0307 22:37:33.914590     660 pod_workers.go:191] Error syncing pod 1cd023e3-2966-4092-bcbd-4c5b6ca28aa3 ("metrics-server-9975d5f86-qmg9k_kube-system(1cd023e3-2966-4092-bcbd-4c5b6ca28aa3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 22:39:26.502442  203173 logs.go:138] Found kubelet problem: Mar 07 22:37:37 old-k8s-version-497253 kubelet[660]: E0307 22:37:37.858788     660 pod_workers.go:191] Error syncing pod 14312bc8-5525-434e-b6f7-56306d708544 ("dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"
	W0307 22:39:26.502771  203173 logs.go:138] Found kubelet problem: Mar 07 22:37:41 old-k8s-version-497253 kubelet[660]: E0307 22:37:41.449456     660 pod_workers.go:191] Error syncing pod 14312bc8-5525-434e-b6f7-56306d708544 ("dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"
	W0307 22:39:26.502959  203173 logs.go:138] Found kubelet problem: Mar 07 22:37:46 old-k8s-version-497253 kubelet[660]: E0307 22:37:46.917492     660 pod_workers.go:191] Error syncing pod 1cd023e3-2966-4092-bcbd-4c5b6ca28aa3 ("metrics-server-9975d5f86-qmg9k_kube-system(1cd023e3-2966-4092-bcbd-4c5b6ca28aa3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 22:39:26.503286  203173 logs.go:138] Found kubelet problem: Mar 07 22:37:56 old-k8s-version-497253 kubelet[660]: E0307 22:37:56.914065     660 pod_workers.go:191] Error syncing pod 14312bc8-5525-434e-b6f7-56306d708544 ("dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"
	W0307 22:39:26.503470  203173 logs.go:138] Found kubelet problem: Mar 07 22:37:57 old-k8s-version-497253 kubelet[660]: E0307 22:37:57.915061     660 pod_workers.go:191] Error syncing pod 1cd023e3-2966-4092-bcbd-4c5b6ca28aa3 ("metrics-server-9975d5f86-qmg9k_kube-system(1cd023e3-2966-4092-bcbd-4c5b6ca28aa3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 22:39:26.503655  203173 logs.go:138] Found kubelet problem: Mar 07 22:38:09 old-k8s-version-497253 kubelet[660]: E0307 22:38:09.914552     660 pod_workers.go:191] Error syncing pod 1cd023e3-2966-4092-bcbd-4c5b6ca28aa3 ("metrics-server-9975d5f86-qmg9k_kube-system(1cd023e3-2966-4092-bcbd-4c5b6ca28aa3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 22:39:26.503988  203173 logs.go:138] Found kubelet problem: Mar 07 22:38:11 old-k8s-version-497253 kubelet[660]: E0307 22:38:11.913955     660 pod_workers.go:191] Error syncing pod 14312bc8-5525-434e-b6f7-56306d708544 ("dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"
	W0307 22:39:26.504175  203173 logs.go:138] Found kubelet problem: Mar 07 22:38:23 old-k8s-version-497253 kubelet[660]: E0307 22:38:23.914295     660 pod_workers.go:191] Error syncing pod 1cd023e3-2966-4092-bcbd-4c5b6ca28aa3 ("metrics-server-9975d5f86-qmg9k_kube-system(1cd023e3-2966-4092-bcbd-4c5b6ca28aa3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 22:39:26.504556  203173 logs.go:138] Found kubelet problem: Mar 07 22:38:26 old-k8s-version-497253 kubelet[660]: E0307 22:38:26.914365     660 pod_workers.go:191] Error syncing pod 14312bc8-5525-434e-b6f7-56306d708544 ("dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"
	W0307 22:39:26.504744  203173 logs.go:138] Found kubelet problem: Mar 07 22:38:35 old-k8s-version-497253 kubelet[660]: E0307 22:38:35.914352     660 pod_workers.go:191] Error syncing pod 1cd023e3-2966-4092-bcbd-4c5b6ca28aa3 ("metrics-server-9975d5f86-qmg9k_kube-system(1cd023e3-2966-4092-bcbd-4c5b6ca28aa3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 22:39:26.505086  203173 logs.go:138] Found kubelet problem: Mar 07 22:38:38 old-k8s-version-497253 kubelet[660]: E0307 22:38:38.914978     660 pod_workers.go:191] Error syncing pod 14312bc8-5525-434e-b6f7-56306d708544 ("dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"
	W0307 22:39:26.505272  203173 logs.go:138] Found kubelet problem: Mar 07 22:38:48 old-k8s-version-497253 kubelet[660]: E0307 22:38:48.914308     660 pod_workers.go:191] Error syncing pod 1cd023e3-2966-4092-bcbd-4c5b6ca28aa3 ("metrics-server-9975d5f86-qmg9k_kube-system(1cd023e3-2966-4092-bcbd-4c5b6ca28aa3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 22:39:26.505628  203173 logs.go:138] Found kubelet problem: Mar 07 22:38:50 old-k8s-version-497253 kubelet[660]: E0307 22:38:50.914584     660 pod_workers.go:191] Error syncing pod 14312bc8-5525-434e-b6f7-56306d708544 ("dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"
	W0307 22:39:26.505961  203173 logs.go:138] Found kubelet problem: Mar 07 22:39:01 old-k8s-version-497253 kubelet[660]: E0307 22:39:01.914578     660 pod_workers.go:191] Error syncing pod 14312bc8-5525-434e-b6f7-56306d708544 ("dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"
	W0307 22:39:26.506151  203173 logs.go:138] Found kubelet problem: Mar 07 22:39:03 old-k8s-version-497253 kubelet[660]: E0307 22:39:03.914427     660 pod_workers.go:191] Error syncing pod 1cd023e3-2966-4092-bcbd-4c5b6ca28aa3 ("metrics-server-9975d5f86-qmg9k_kube-system(1cd023e3-2966-4092-bcbd-4c5b6ca28aa3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 22:39:26.506484  203173 logs.go:138] Found kubelet problem: Mar 07 22:39:15 old-k8s-version-497253 kubelet[660]: E0307 22:39:15.913973     660 pod_workers.go:191] Error syncing pod 14312bc8-5525-434e-b6f7-56306d708544 ("dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"
	W0307 22:39:26.506668  203173 logs.go:138] Found kubelet problem: Mar 07 22:39:16 old-k8s-version-497253 kubelet[660]: E0307 22:39:16.918241     660 pod_workers.go:191] Error syncing pod 1cd023e3-2966-4092-bcbd-4c5b6ca28aa3 ("metrics-server-9975d5f86-qmg9k_kube-system(1cd023e3-2966-4092-bcbd-4c5b6ca28aa3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
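
The W lines above reduce to two repeating failure modes: metrics-server can never pull fake.domain/registry.k8s.io/echoserver:1.4 because fake.domain does not resolve (ErrImagePull, then ImagePullBackOff), and dashboard-metrics-scraper sits in CrashLoopBackOff with a back-off that grows from 10s to 2m40s. The scan that produces these entries is essentially a pattern match over the kubelet journal; here is a rough sketch of that idea, where the regexp and the error classes it matches are assumptions for illustration, not minikube's logs.go.

package main

import (
	"bufio"
	"fmt"
	"os/exec"
	"regexp"
)

// problemRe matches kubelet error-level lines for the failure classes seen in
// this report; the pattern is illustrative, not minikube's actual filter.
var problemRe = regexp.MustCompile(`kubelet\[\d+\]: E\d{4} .*(ErrImagePull|ImagePullBackOff|CrashLoopBackOff)`)

// kubeletProblems runs `journalctl -u kubelet -n 400` (the same command the
// log-gathering step above uses) and returns the lines problemRe flags.
func kubeletProblems() ([]string, error) {
	cmd := exec.Command("sudo", "journalctl", "-u", "kubelet", "-n", "400")
	out, err := cmd.StdoutPipe()
	if err != nil {
		return nil, err
	}
	if err := cmd.Start(); err != nil {
		return nil, err
	}
	var problems []string
	sc := bufio.NewScanner(out)
	sc.Buffer(make([]byte, 0, 64*1024), 1024*1024) // journal lines can be long
	for sc.Scan() {
		if problemRe.MatchString(sc.Text()) {
			problems = append(problems, sc.Text())
		}
	}
	if err := cmd.Wait(); err != nil {
		return nil, err
	}
	return problems, sc.Err()
}

func main() {
	problems, err := kubeletProblems()
	fmt.Printf("%d kubelet problems (err: %v)\n", len(problems), err)
	for _, p := range problems {
		fmt.Println(p)
	}
}
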
	I0307 22:39:26.506680  203173 logs.go:123] Gathering logs for kube-apiserver [8b2ed0287cc5bbf594cda29d926ae828c7d6d12ad490285c6e855c8824388d29] ...
	I0307 22:39:26.506725  203173 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8b2ed0287cc5bbf594cda29d926ae828c7d6d12ad490285c6e855c8824388d29"
	I0307 22:39:26.567581  203173 logs.go:123] Gathering logs for coredns [a105c02644c9c901090c4c9792a32b0396cee09395513751eeb1be7fbf4b8477] ...
	I0307 22:39:26.567616  203173 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a105c02644c9c901090c4c9792a32b0396cee09395513751eeb1be7fbf4b8477"
	I0307 22:39:26.617169  203173 logs.go:123] Gathering logs for storage-provisioner [426c41631cb95479682cbd7b19483427f710b2f7932b9e6d2bc0367710c2ca1d] ...
	I0307 22:39:26.617197  203173 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 426c41631cb95479682cbd7b19483427f710b2f7932b9e6d2bc0367710c2ca1d"
	I0307 22:39:26.667945  203173 logs.go:123] Gathering logs for storage-provisioner [f98ff695720f5ebea06fa188196a8a6393c29dfd0211d4a0ec784c54cbb9f906] ...
	I0307 22:39:26.667973  203173 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f98ff695720f5ebea06fa188196a8a6393c29dfd0211d4a0ec784c54cbb9f906"
	I0307 22:39:26.707720  203173 logs.go:123] Gathering logs for describe nodes ...
	I0307 22:39:26.707752  203173 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 22:39:26.855434  203173 logs.go:123] Gathering logs for etcd [dabbb0b4fd2574e92c3a53182c494ec08df151f99bd21d27189df149b7f8961c] ...
	I0307 22:39:26.855466  203173 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dabbb0b4fd2574e92c3a53182c494ec08df151f99bd21d27189df149b7f8961c"
	I0307 22:39:26.907427  203173 logs.go:123] Gathering logs for kube-controller-manager [9ba0a9f80bc7b1866c7509e8df81dcc0d3e74b8702a84d7676660a9395cf59fc] ...
	I0307 22:39:26.907454  203173 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ba0a9f80bc7b1866c7509e8df81dcc0d3e74b8702a84d7676660a9395cf59fc"
	I0307 22:39:26.977630  203173 out.go:304] Setting ErrFile to fd 2...
	I0307 22:39:26.977706  203173 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0307 22:39:26.977771  203173 out.go:239] X Problems detected in kubelet:
	W0307 22:39:26.977934  203173 out.go:239]   Mar 07 22:38:50 old-k8s-version-497253 kubelet[660]: E0307 22:38:50.914584     660 pod_workers.go:191] Error syncing pod 14312bc8-5525-434e-b6f7-56306d708544 ("dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"
	W0307 22:39:26.977983  203173 out.go:239]   Mar 07 22:39:01 old-k8s-version-497253 kubelet[660]: E0307 22:39:01.914578     660 pod_workers.go:191] Error syncing pod 14312bc8-5525-434e-b6f7-56306d708544 ("dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"
	W0307 22:39:26.978041  203173 out.go:239]   Mar 07 22:39:03 old-k8s-version-497253 kubelet[660]: E0307 22:39:03.914427     660 pod_workers.go:191] Error syncing pod 1cd023e3-2966-4092-bcbd-4c5b6ca28aa3 ("metrics-server-9975d5f86-qmg9k_kube-system(1cd023e3-2966-4092-bcbd-4c5b6ca28aa3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 22:39:26.978077  203173 out.go:239]   Mar 07 22:39:15 old-k8s-version-497253 kubelet[660]: E0307 22:39:15.913973     660 pod_workers.go:191] Error syncing pod 14312bc8-5525-434e-b6f7-56306d708544 ("dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"
	W0307 22:39:26.978110  203173 out.go:239]   Mar 07 22:39:16 old-k8s-version-497253 kubelet[660]: E0307 22:39:16.918241     660 pod_workers.go:191] Error syncing pod 1cd023e3-2966-4092-bcbd-4c5b6ca28aa3 ("metrics-server-9975d5f86-qmg9k_kube-system(1cd023e3-2966-4092-bcbd-4c5b6ca28aa3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0307 22:39:26.978155  203173 out.go:304] Setting ErrFile to fd 2...
	I0307 22:39:26.978179  203173 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 22:39:36.979366  203173 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 22:39:36.991646  203173 api_server.go:72] duration metric: took 5m52.144665067s to wait for apiserver process to appear ...
	I0307 22:39:36.991674  203173 api_server.go:88] waiting for apiserver healthz status ...
	I0307 22:39:36.991726  203173 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0307 22:39:36.991797  203173 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0307 22:39:37.043778  203173 cri.go:89] found id: "8b2ed0287cc5bbf594cda29d926ae828c7d6d12ad490285c6e855c8824388d29"
	I0307 22:39:37.043817  203173 cri.go:89] found id: "cc2db588ff857bfb95417bfc8b125ca64e6caba952ba4acd81ec27522fde3127"
	I0307 22:39:37.043823  203173 cri.go:89] found id: ""
	I0307 22:39:37.043831  203173 logs.go:276] 2 containers: [8b2ed0287cc5bbf594cda29d926ae828c7d6d12ad490285c6e855c8824388d29 cc2db588ff857bfb95417bfc8b125ca64e6caba952ba4acd81ec27522fde3127]
	I0307 22:39:37.043902  203173 ssh_runner.go:195] Run: which crictl
	I0307 22:39:37.049541  203173 ssh_runner.go:195] Run: which crictl
	I0307 22:39:37.054684  203173 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0307 22:39:37.054809  203173 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0307 22:39:37.100838  203173 cri.go:89] found id: "14afb08fcf0333a1d360d000f1fa3b158ae712b1b2a3e6c52b164c05245f920d"
	I0307 22:39:37.100860  203173 cri.go:89] found id: "dabbb0b4fd2574e92c3a53182c494ec08df151f99bd21d27189df149b7f8961c"
	I0307 22:39:37.100865  203173 cri.go:89] found id: ""
	I0307 22:39:37.100873  203173 logs.go:276] 2 containers: [14afb08fcf0333a1d360d000f1fa3b158ae712b1b2a3e6c52b164c05245f920d dabbb0b4fd2574e92c3a53182c494ec08df151f99bd21d27189df149b7f8961c]
	I0307 22:39:37.100932  203173 ssh_runner.go:195] Run: which crictl
	I0307 22:39:37.104646  203173 ssh_runner.go:195] Run: which crictl
	I0307 22:39:37.108148  203173 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0307 22:39:37.108223  203173 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0307 22:39:37.151349  203173 cri.go:89] found id: "413452dbe13bfb277303d92484791aa53ae9a198a089364135d2347bf8475623"
	I0307 22:39:37.151380  203173 cri.go:89] found id: "a105c02644c9c901090c4c9792a32b0396cee09395513751eeb1be7fbf4b8477"
	I0307 22:39:37.151386  203173 cri.go:89] found id: ""
	I0307 22:39:37.151393  203173 logs.go:276] 2 containers: [413452dbe13bfb277303d92484791aa53ae9a198a089364135d2347bf8475623 a105c02644c9c901090c4c9792a32b0396cee09395513751eeb1be7fbf4b8477]
	I0307 22:39:37.151449  203173 ssh_runner.go:195] Run: which crictl
	I0307 22:39:37.155102  203173 ssh_runner.go:195] Run: which crictl
	I0307 22:39:37.158773  203173 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0307 22:39:37.158875  203173 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0307 22:39:37.197623  203173 cri.go:89] found id: "157ece61e2d99f79b42530111727fc9798a3b91433c5d3a5cd83cefec75a6a67"
	I0307 22:39:37.197647  203173 cri.go:89] found id: "2ca89421ca012456f503b4b92ac92a69bf6b0042461af4e2e3140e7adb8b27dc"
	I0307 22:39:37.197652  203173 cri.go:89] found id: ""
	I0307 22:39:37.197659  203173 logs.go:276] 2 containers: [157ece61e2d99f79b42530111727fc9798a3b91433c5d3a5cd83cefec75a6a67 2ca89421ca012456f503b4b92ac92a69bf6b0042461af4e2e3140e7adb8b27dc]
	I0307 22:39:37.197711  203173 ssh_runner.go:195] Run: which crictl
	I0307 22:39:37.201264  203173 ssh_runner.go:195] Run: which crictl
	I0307 22:39:37.204706  203173 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0307 22:39:37.204782  203173 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0307 22:39:37.244484  203173 cri.go:89] found id: "7998095a37dd2896f96215e8ab5d29ae495d3dc1a00b119506f8ace55156b07a"
	I0307 22:39:37.244508  203173 cri.go:89] found id: "f1df99d1a705a4556e4be13c16d400b99149438c96bfe6da863b8b041983b036"
	I0307 22:39:37.244513  203173 cri.go:89] found id: ""
	I0307 22:39:37.244520  203173 logs.go:276] 2 containers: [7998095a37dd2896f96215e8ab5d29ae495d3dc1a00b119506f8ace55156b07a f1df99d1a705a4556e4be13c16d400b99149438c96bfe6da863b8b041983b036]
	I0307 22:39:37.244580  203173 ssh_runner.go:195] Run: which crictl
	I0307 22:39:37.248425  203173 ssh_runner.go:195] Run: which crictl
	I0307 22:39:37.252613  203173 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0307 22:39:37.252752  203173 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0307 22:39:37.296151  203173 cri.go:89] found id: "9ba0a9f80bc7b1866c7509e8df81dcc0d3e74b8702a84d7676660a9395cf59fc"
	I0307 22:39:37.296177  203173 cri.go:89] found id: "bd7a60d8d4f26067debaaf5b2c23fe94dcfed76118b17104051bc8c755b35ce1"
	I0307 22:39:37.296182  203173 cri.go:89] found id: ""
	I0307 22:39:37.296189  203173 logs.go:276] 2 containers: [9ba0a9f80bc7b1866c7509e8df81dcc0d3e74b8702a84d7676660a9395cf59fc bd7a60d8d4f26067debaaf5b2c23fe94dcfed76118b17104051bc8c755b35ce1]
	I0307 22:39:37.296344  203173 ssh_runner.go:195] Run: which crictl
	I0307 22:39:37.300120  203173 ssh_runner.go:195] Run: which crictl
	I0307 22:39:37.303560  203173 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0307 22:39:37.303657  203173 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0307 22:39:37.341312  203173 cri.go:89] found id: "2543aa094e87373b5f846e8bdc2bbadbdda2c55f6d326c6db6b9216375dde5e6"
	I0307 22:39:37.341335  203173 cri.go:89] found id: "e9768721745ab392d0ad14d4055c5cf01cc780491a01953b4b45bf77d2d3b062"
	I0307 22:39:37.341341  203173 cri.go:89] found id: ""
	I0307 22:39:37.341348  203173 logs.go:276] 2 containers: [2543aa094e87373b5f846e8bdc2bbadbdda2c55f6d326c6db6b9216375dde5e6 e9768721745ab392d0ad14d4055c5cf01cc780491a01953b4b45bf77d2d3b062]
	I0307 22:39:37.341404  203173 ssh_runner.go:195] Run: which crictl
	I0307 22:39:37.344938  203173 ssh_runner.go:195] Run: which crictl
	I0307 22:39:37.348625  203173 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0307 22:39:37.348737  203173 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0307 22:39:37.386014  203173 cri.go:89] found id: "426c41631cb95479682cbd7b19483427f710b2f7932b9e6d2bc0367710c2ca1d"
	I0307 22:39:37.386037  203173 cri.go:89] found id: "f98ff695720f5ebea06fa188196a8a6393c29dfd0211d4a0ec784c54cbb9f906"
	I0307 22:39:37.386042  203173 cri.go:89] found id: ""
	I0307 22:39:37.386049  203173 logs.go:276] 2 containers: [426c41631cb95479682cbd7b19483427f710b2f7932b9e6d2bc0367710c2ca1d f98ff695720f5ebea06fa188196a8a6393c29dfd0211d4a0ec784c54cbb9f906]
	I0307 22:39:37.386141  203173 ssh_runner.go:195] Run: which crictl
	I0307 22:39:37.390014  203173 ssh_runner.go:195] Run: which crictl
	I0307 22:39:37.393469  203173 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0307 22:39:37.393596  203173 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0307 22:39:37.459166  203173 cri.go:89] found id: "778e16c451c2e6a7fecd1d1b5d37778e92f50fd757af7d8812222cfa025fdc86"
	I0307 22:39:37.459191  203173 cri.go:89] found id: ""
	I0307 22:39:37.459199  203173 logs.go:276] 1 containers: [778e16c451c2e6a7fecd1d1b5d37778e92f50fd757af7d8812222cfa025fdc86]
	I0307 22:39:37.459285  203173 ssh_runner.go:195] Run: which crictl
	I0307 22:39:37.463408  203173 logs.go:123] Gathering logs for describe nodes ...
	I0307 22:39:37.463433  203173 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 22:39:37.633708  203173 logs.go:123] Gathering logs for kube-apiserver [cc2db588ff857bfb95417bfc8b125ca64e6caba952ba4acd81ec27522fde3127] ...
	I0307 22:39:37.633777  203173 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cc2db588ff857bfb95417bfc8b125ca64e6caba952ba4acd81ec27522fde3127"
	I0307 22:39:37.699015  203173 logs.go:123] Gathering logs for etcd [dabbb0b4fd2574e92c3a53182c494ec08df151f99bd21d27189df149b7f8961c] ...
	I0307 22:39:37.699048  203173 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dabbb0b4fd2574e92c3a53182c494ec08df151f99bd21d27189df149b7f8961c"
	I0307 22:39:37.740792  203173 logs.go:123] Gathering logs for kube-scheduler [157ece61e2d99f79b42530111727fc9798a3b91433c5d3a5cd83cefec75a6a67] ...
	I0307 22:39:37.740824  203173 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 157ece61e2d99f79b42530111727fc9798a3b91433c5d3a5cd83cefec75a6a67"
	I0307 22:39:37.793158  203173 logs.go:123] Gathering logs for kindnet [2543aa094e87373b5f846e8bdc2bbadbdda2c55f6d326c6db6b9216375dde5e6] ...
	I0307 22:39:37.793186  203173 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2543aa094e87373b5f846e8bdc2bbadbdda2c55f6d326c6db6b9216375dde5e6"
	I0307 22:39:37.845822  203173 logs.go:123] Gathering logs for kubelet ...
	I0307 22:39:37.845849  203173 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0307 22:39:37.901312  203173 logs.go:138] Found kubelet problem: Mar 07 22:34:05 old-k8s-version-497253 kubelet[660]: E0307 22:34:05.261951     660 reflector.go:138] object-"kube-system"/"kindnet-token-ghqhd": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-ghqhd" is forbidden: User "system:node:old-k8s-version-497253" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-497253' and this object
	W0307 22:39:37.908351  203173 logs.go:138] Found kubelet problem: Mar 07 22:34:07 old-k8s-version-497253 kubelet[660]: E0307 22:34:07.162816     660 pod_workers.go:191] Error syncing pod 1cd023e3-2966-4092-bcbd-4c5b6ca28aa3 ("metrics-server-9975d5f86-qmg9k_kube-system(1cd023e3-2966-4092-bcbd-4c5b6ca28aa3)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0307 22:39:37.908549  203173 logs.go:138] Found kubelet problem: Mar 07 22:34:07 old-k8s-version-497253 kubelet[660]: E0307 22:34:07.277655     660 pod_workers.go:191] Error syncing pod 1cd023e3-2966-4092-bcbd-4c5b6ca28aa3 ("metrics-server-9975d5f86-qmg9k_kube-system(1cd023e3-2966-4092-bcbd-4c5b6ca28aa3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 22:39:37.911329  203173 logs.go:138] Found kubelet problem: Mar 07 22:34:22 old-k8s-version-497253 kubelet[660]: E0307 22:34:22.936411     660 pod_workers.go:191] Error syncing pod 1cd023e3-2966-4092-bcbd-4c5b6ca28aa3 ("metrics-server-9975d5f86-qmg9k_kube-system(1cd023e3-2966-4092-bcbd-4c5b6ca28aa3)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0307 22:39:37.913515  203173 logs.go:138] Found kubelet problem: Mar 07 22:34:36 old-k8s-version-497253 kubelet[660]: E0307 22:34:36.920792     660 pod_workers.go:191] Error syncing pod 1cd023e3-2966-4092-bcbd-4c5b6ca28aa3 ("metrics-server-9975d5f86-qmg9k_kube-system(1cd023e3-2966-4092-bcbd-4c5b6ca28aa3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 22:39:37.913980  203173 logs.go:138] Found kubelet problem: Mar 07 22:34:37 old-k8s-version-497253 kubelet[660]: E0307 22:34:37.462923     660 pod_workers.go:191] Error syncing pod 14312bc8-5525-434e-b6f7-56306d708544 ("dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"
	W0307 22:39:37.914311  203173 logs.go:138] Found kubelet problem: Mar 07 22:34:38 old-k8s-version-497253 kubelet[660]: E0307 22:34:38.465991     660 pod_workers.go:191] Error syncing pod 14312bc8-5525-434e-b6f7-56306d708544 ("dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"
	W0307 22:39:37.914752  203173 logs.go:138] Found kubelet problem: Mar 07 22:34:39 old-k8s-version-497253 kubelet[660]: E0307 22:34:39.469998     660 pod_workers.go:191] Error syncing pod f477a573-d9fd-4235-84a6-32e52bea48e1 ("storage-provisioner_kube-system(f477a573-d9fd-4235-84a6-32e52bea48e1)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(f477a573-d9fd-4235-84a6-32e52bea48e1)"
	W0307 22:39:37.915080  203173 logs.go:138] Found kubelet problem: Mar 07 22:34:41 old-k8s-version-497253 kubelet[660]: E0307 22:34:41.449013     660 pod_workers.go:191] Error syncing pod 14312bc8-5525-434e-b6f7-56306d708544 ("dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"
	W0307 22:39:37.918016  203173 logs.go:138] Found kubelet problem: Mar 07 22:34:51 old-k8s-version-497253 kubelet[660]: E0307 22:34:51.926704     660 pod_workers.go:191] Error syncing pod 1cd023e3-2966-4092-bcbd-4c5b6ca28aa3 ("metrics-server-9975d5f86-qmg9k_kube-system(1cd023e3-2966-4092-bcbd-4c5b6ca28aa3)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0307 22:39:37.918607  203173 logs.go:138] Found kubelet problem: Mar 07 22:34:54 old-k8s-version-497253 kubelet[660]: E0307 22:34:54.505288     660 pod_workers.go:191] Error syncing pod 14312bc8-5525-434e-b6f7-56306d708544 ("dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"
	W0307 22:39:37.918933  203173 logs.go:138] Found kubelet problem: Mar 07 22:35:01 old-k8s-version-497253 kubelet[660]: E0307 22:35:01.448994     660 pod_workers.go:191] Error syncing pod 14312bc8-5525-434e-b6f7-56306d708544 ("dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"
	W0307 22:39:37.919119  203173 logs.go:138] Found kubelet problem: Mar 07 22:35:02 old-k8s-version-497253 kubelet[660]: E0307 22:35:02.915003     660 pod_workers.go:191] Error syncing pod 1cd023e3-2966-4092-bcbd-4c5b6ca28aa3 ("metrics-server-9975d5f86-qmg9k_kube-system(1cd023e3-2966-4092-bcbd-4c5b6ca28aa3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 22:39:37.919448  203173 logs.go:138] Found kubelet problem: Mar 07 22:35:12 old-k8s-version-497253 kubelet[660]: E0307 22:35:12.922852     660 pod_workers.go:191] Error syncing pod 14312bc8-5525-434e-b6f7-56306d708544 ("dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"
	W0307 22:39:37.919635  203173 logs.go:138] Found kubelet problem: Mar 07 22:35:14 old-k8s-version-497253 kubelet[660]: E0307 22:35:14.914572     660 pod_workers.go:191] Error syncing pod 1cd023e3-2966-4092-bcbd-4c5b6ca28aa3 ("metrics-server-9975d5f86-qmg9k_kube-system(1cd023e3-2966-4092-bcbd-4c5b6ca28aa3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 22:39:37.920224  203173 logs.go:138] Found kubelet problem: Mar 07 22:35:25 old-k8s-version-497253 kubelet[660]: E0307 22:35:25.578672     660 pod_workers.go:191] Error syncing pod 14312bc8-5525-434e-b6f7-56306d708544 ("dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"
	W0307 22:39:37.920418  203173 logs.go:138] Found kubelet problem: Mar 07 22:35:26 old-k8s-version-497253 kubelet[660]: E0307 22:35:26.918579     660 pod_workers.go:191] Error syncing pod 1cd023e3-2966-4092-bcbd-4c5b6ca28aa3 ("metrics-server-9975d5f86-qmg9k_kube-system(1cd023e3-2966-4092-bcbd-4c5b6ca28aa3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 22:39:37.920747  203173 logs.go:138] Found kubelet problem: Mar 07 22:35:31 old-k8s-version-497253 kubelet[660]: E0307 22:35:31.448901     660 pod_workers.go:191] Error syncing pod 14312bc8-5525-434e-b6f7-56306d708544 ("dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"
	W0307 22:39:37.923205  203173 logs.go:138] Found kubelet problem: Mar 07 22:35:39 old-k8s-version-497253 kubelet[660]: E0307 22:35:39.938727     660 pod_workers.go:191] Error syncing pod 1cd023e3-2966-4092-bcbd-4c5b6ca28aa3 ("metrics-server-9975d5f86-qmg9k_kube-system(1cd023e3-2966-4092-bcbd-4c5b6ca28aa3)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0307 22:39:37.923532  203173 logs.go:138] Found kubelet problem: Mar 07 22:35:46 old-k8s-version-497253 kubelet[660]: E0307 22:35:46.914538     660 pod_workers.go:191] Error syncing pod 14312bc8-5525-434e-b6f7-56306d708544 ("dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"
	W0307 22:39:37.923717  203173 logs.go:138] Found kubelet problem: Mar 07 22:35:53 old-k8s-version-497253 kubelet[660]: E0307 22:35:53.914358     660 pod_workers.go:191] Error syncing pod 1cd023e3-2966-4092-bcbd-4c5b6ca28aa3 ("metrics-server-9975d5f86-qmg9k_kube-system(1cd023e3-2966-4092-bcbd-4c5b6ca28aa3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 22:39:37.924047  203173 logs.go:138] Found kubelet problem: Mar 07 22:36:00 old-k8s-version-497253 kubelet[660]: E0307 22:36:00.917774     660 pod_workers.go:191] Error syncing pod 14312bc8-5525-434e-b6f7-56306d708544 ("dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"
	W0307 22:39:37.924231  203173 logs.go:138] Found kubelet problem: Mar 07 22:36:05 old-k8s-version-497253 kubelet[660]: E0307 22:36:05.914412     660 pod_workers.go:191] Error syncing pod 1cd023e3-2966-4092-bcbd-4c5b6ca28aa3 ("metrics-server-9975d5f86-qmg9k_kube-system(1cd023e3-2966-4092-bcbd-4c5b6ca28aa3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 22:39:37.924825  203173 logs.go:138] Found kubelet problem: Mar 07 22:36:15 old-k8s-version-497253 kubelet[660]: E0307 22:36:15.682987     660 pod_workers.go:191] Error syncing pod 14312bc8-5525-434e-b6f7-56306d708544 ("dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"
	W0307 22:39:37.925010  203173 logs.go:138] Found kubelet problem: Mar 07 22:36:18 old-k8s-version-497253 kubelet[660]: E0307 22:36:18.914367     660 pod_workers.go:191] Error syncing pod 1cd023e3-2966-4092-bcbd-4c5b6ca28aa3 ("metrics-server-9975d5f86-qmg9k_kube-system(1cd023e3-2966-4092-bcbd-4c5b6ca28aa3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 22:39:37.925336  203173 logs.go:138] Found kubelet problem: Mar 07 22:36:21 old-k8s-version-497253 kubelet[660]: E0307 22:36:21.449525     660 pod_workers.go:191] Error syncing pod 14312bc8-5525-434e-b6f7-56306d708544 ("dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"
	W0307 22:39:37.925522  203173 logs.go:138] Found kubelet problem: Mar 07 22:36:31 old-k8s-version-497253 kubelet[660]: E0307 22:36:31.914470     660 pod_workers.go:191] Error syncing pod 1cd023e3-2966-4092-bcbd-4c5b6ca28aa3 ("metrics-server-9975d5f86-qmg9k_kube-system(1cd023e3-2966-4092-bcbd-4c5b6ca28aa3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 22:39:37.925856  203173 logs.go:138] Found kubelet problem: Mar 07 22:36:33 old-k8s-version-497253 kubelet[660]: E0307 22:36:33.914221     660 pod_workers.go:191] Error syncing pod 14312bc8-5525-434e-b6f7-56306d708544 ("dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"
	W0307 22:39:37.926040  203173 logs.go:138] Found kubelet problem: Mar 07 22:36:42 old-k8s-version-497253 kubelet[660]: E0307 22:36:42.914696     660 pod_workers.go:191] Error syncing pod 1cd023e3-2966-4092-bcbd-4c5b6ca28aa3 ("metrics-server-9975d5f86-qmg9k_kube-system(1cd023e3-2966-4092-bcbd-4c5b6ca28aa3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 22:39:37.926366  203173 logs.go:138] Found kubelet problem: Mar 07 22:36:45 old-k8s-version-497253 kubelet[660]: E0307 22:36:45.914100     660 pod_workers.go:191] Error syncing pod 14312bc8-5525-434e-b6f7-56306d708544 ("dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"
	W0307 22:39:37.926550  203173 logs.go:138] Found kubelet problem: Mar 07 22:36:57 old-k8s-version-497253 kubelet[660]: E0307 22:36:57.914372     660 pod_workers.go:191] Error syncing pod 1cd023e3-2966-4092-bcbd-4c5b6ca28aa3 ("metrics-server-9975d5f86-qmg9k_kube-system(1cd023e3-2966-4092-bcbd-4c5b6ca28aa3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 22:39:37.926879  203173 logs.go:138] Found kubelet problem: Mar 07 22:36:58 old-k8s-version-497253 kubelet[660]: E0307 22:36:58.914093     660 pod_workers.go:191] Error syncing pod 14312bc8-5525-434e-b6f7-56306d708544 ("dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"
	W0307 22:39:37.927206  203173 logs.go:138] Found kubelet problem: Mar 07 22:37:10 old-k8s-version-497253 kubelet[660]: E0307 22:37:10.914874     660 pod_workers.go:191] Error syncing pod 14312bc8-5525-434e-b6f7-56306d708544 ("dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"
	W0307 22:39:37.929692  203173 logs.go:138] Found kubelet problem: Mar 07 22:37:10 old-k8s-version-497253 kubelet[660]: E0307 22:37:10.928532     660 pod_workers.go:191] Error syncing pod 1cd023e3-2966-4092-bcbd-4c5b6ca28aa3 ("metrics-server-9975d5f86-qmg9k_kube-system(1cd023e3-2966-4092-bcbd-4c5b6ca28aa3)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0307 22:39:37.929880  203173 logs.go:138] Found kubelet problem: Mar 07 22:37:21 old-k8s-version-497253 kubelet[660]: E0307 22:37:21.914234     660 pod_workers.go:191] Error syncing pod 1cd023e3-2966-4092-bcbd-4c5b6ca28aa3 ("metrics-server-9975d5f86-qmg9k_kube-system(1cd023e3-2966-4092-bcbd-4c5b6ca28aa3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 22:39:37.930207  203173 logs.go:138] Found kubelet problem: Mar 07 22:37:23 old-k8s-version-497253 kubelet[660]: E0307 22:37:23.914014     660 pod_workers.go:191] Error syncing pod 14312bc8-5525-434e-b6f7-56306d708544 ("dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"
	W0307 22:39:37.930390  203173 logs.go:138] Found kubelet problem: Mar 07 22:37:33 old-k8s-version-497253 kubelet[660]: E0307 22:37:33.914590     660 pod_workers.go:191] Error syncing pod 1cd023e3-2966-4092-bcbd-4c5b6ca28aa3 ("metrics-server-9975d5f86-qmg9k_kube-system(1cd023e3-2966-4092-bcbd-4c5b6ca28aa3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 22:39:37.930976  203173 logs.go:138] Found kubelet problem: Mar 07 22:37:37 old-k8s-version-497253 kubelet[660]: E0307 22:37:37.858788     660 pod_workers.go:191] Error syncing pod 14312bc8-5525-434e-b6f7-56306d708544 ("dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"
	W0307 22:39:37.931303  203173 logs.go:138] Found kubelet problem: Mar 07 22:37:41 old-k8s-version-497253 kubelet[660]: E0307 22:37:41.449456     660 pod_workers.go:191] Error syncing pod 14312bc8-5525-434e-b6f7-56306d708544 ("dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"
	W0307 22:39:37.931488  203173 logs.go:138] Found kubelet problem: Mar 07 22:37:46 old-k8s-version-497253 kubelet[660]: E0307 22:37:46.917492     660 pod_workers.go:191] Error syncing pod 1cd023e3-2966-4092-bcbd-4c5b6ca28aa3 ("metrics-server-9975d5f86-qmg9k_kube-system(1cd023e3-2966-4092-bcbd-4c5b6ca28aa3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 22:39:37.931817  203173 logs.go:138] Found kubelet problem: Mar 07 22:37:56 old-k8s-version-497253 kubelet[660]: E0307 22:37:56.914065     660 pod_workers.go:191] Error syncing pod 14312bc8-5525-434e-b6f7-56306d708544 ("dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"
	W0307 22:39:37.932001  203173 logs.go:138] Found kubelet problem: Mar 07 22:37:57 old-k8s-version-497253 kubelet[660]: E0307 22:37:57.915061     660 pod_workers.go:191] Error syncing pod 1cd023e3-2966-4092-bcbd-4c5b6ca28aa3 ("metrics-server-9975d5f86-qmg9k_kube-system(1cd023e3-2966-4092-bcbd-4c5b6ca28aa3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 22:39:37.932185  203173 logs.go:138] Found kubelet problem: Mar 07 22:38:09 old-k8s-version-497253 kubelet[660]: E0307 22:38:09.914552     660 pod_workers.go:191] Error syncing pod 1cd023e3-2966-4092-bcbd-4c5b6ca28aa3 ("metrics-server-9975d5f86-qmg9k_kube-system(1cd023e3-2966-4092-bcbd-4c5b6ca28aa3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 22:39:37.932518  203173 logs.go:138] Found kubelet problem: Mar 07 22:38:11 old-k8s-version-497253 kubelet[660]: E0307 22:38:11.913955     660 pod_workers.go:191] Error syncing pod 14312bc8-5525-434e-b6f7-56306d708544 ("dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"
	W0307 22:39:37.932704  203173 logs.go:138] Found kubelet problem: Mar 07 22:38:23 old-k8s-version-497253 kubelet[660]: E0307 22:38:23.914295     660 pod_workers.go:191] Error syncing pod 1cd023e3-2966-4092-bcbd-4c5b6ca28aa3 ("metrics-server-9975d5f86-qmg9k_kube-system(1cd023e3-2966-4092-bcbd-4c5b6ca28aa3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 22:39:37.933032  203173 logs.go:138] Found kubelet problem: Mar 07 22:38:26 old-k8s-version-497253 kubelet[660]: E0307 22:38:26.914365     660 pod_workers.go:191] Error syncing pod 14312bc8-5525-434e-b6f7-56306d708544 ("dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"
	W0307 22:39:37.933216  203173 logs.go:138] Found kubelet problem: Mar 07 22:38:35 old-k8s-version-497253 kubelet[660]: E0307 22:38:35.914352     660 pod_workers.go:191] Error syncing pod 1cd023e3-2966-4092-bcbd-4c5b6ca28aa3 ("metrics-server-9975d5f86-qmg9k_kube-system(1cd023e3-2966-4092-bcbd-4c5b6ca28aa3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 22:39:37.933542  203173 logs.go:138] Found kubelet problem: Mar 07 22:38:38 old-k8s-version-497253 kubelet[660]: E0307 22:38:38.914978     660 pod_workers.go:191] Error syncing pod 14312bc8-5525-434e-b6f7-56306d708544 ("dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"
	W0307 22:39:37.933726  203173 logs.go:138] Found kubelet problem: Mar 07 22:38:48 old-k8s-version-497253 kubelet[660]: E0307 22:38:48.914308     660 pod_workers.go:191] Error syncing pod 1cd023e3-2966-4092-bcbd-4c5b6ca28aa3 ("metrics-server-9975d5f86-qmg9k_kube-system(1cd023e3-2966-4092-bcbd-4c5b6ca28aa3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 22:39:37.934052  203173 logs.go:138] Found kubelet problem: Mar 07 22:38:50 old-k8s-version-497253 kubelet[660]: E0307 22:38:50.914584     660 pod_workers.go:191] Error syncing pod 14312bc8-5525-434e-b6f7-56306d708544 ("dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"
	W0307 22:39:37.934378  203173 logs.go:138] Found kubelet problem: Mar 07 22:39:01 old-k8s-version-497253 kubelet[660]: E0307 22:39:01.914578     660 pod_workers.go:191] Error syncing pod 14312bc8-5525-434e-b6f7-56306d708544 ("dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"
	W0307 22:39:37.934564  203173 logs.go:138] Found kubelet problem: Mar 07 22:39:03 old-k8s-version-497253 kubelet[660]: E0307 22:39:03.914427     660 pod_workers.go:191] Error syncing pod 1cd023e3-2966-4092-bcbd-4c5b6ca28aa3 ("metrics-server-9975d5f86-qmg9k_kube-system(1cd023e3-2966-4092-bcbd-4c5b6ca28aa3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 22:39:37.934899  203173 logs.go:138] Found kubelet problem: Mar 07 22:39:15 old-k8s-version-497253 kubelet[660]: E0307 22:39:15.913973     660 pod_workers.go:191] Error syncing pod 14312bc8-5525-434e-b6f7-56306d708544 ("dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"
	W0307 22:39:37.935085  203173 logs.go:138] Found kubelet problem: Mar 07 22:39:16 old-k8s-version-497253 kubelet[660]: E0307 22:39:16.918241     660 pod_workers.go:191] Error syncing pod 1cd023e3-2966-4092-bcbd-4c5b6ca28aa3 ("metrics-server-9975d5f86-qmg9k_kube-system(1cd023e3-2966-4092-bcbd-4c5b6ca28aa3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 22:39:37.935411  203173 logs.go:138] Found kubelet problem: Mar 07 22:39:26 old-k8s-version-497253 kubelet[660]: E0307 22:39:26.914096     660 pod_workers.go:191] Error syncing pod 14312bc8-5525-434e-b6f7-56306d708544 ("dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"
	W0307 22:39:37.935596  203173 logs.go:138] Found kubelet problem: Mar 07 22:39:29 old-k8s-version-497253 kubelet[660]: E0307 22:39:29.914280     660 pod_workers.go:191] Error syncing pod 1cd023e3-2966-4092-bcbd-4c5b6ca28aa3 ("metrics-server-9975d5f86-qmg9k_kube-system(1cd023e3-2966-4092-bcbd-4c5b6ca28aa3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0307 22:39:37.935605  203173 logs.go:123] Gathering logs for etcd [14afb08fcf0333a1d360d000f1fa3b158ae712b1b2a3e6c52b164c05245f920d] ...
	I0307 22:39:37.935619  203173 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 14afb08fcf0333a1d360d000f1fa3b158ae712b1b2a3e6c52b164c05245f920d"
	I0307 22:39:37.997943  203173 logs.go:123] Gathering logs for kube-controller-manager [9ba0a9f80bc7b1866c7509e8df81dcc0d3e74b8702a84d7676660a9395cf59fc] ...
	I0307 22:39:37.997972  203173 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ba0a9f80bc7b1866c7509e8df81dcc0d3e74b8702a84d7676660a9395cf59fc"
	I0307 22:39:38.115147  203173 logs.go:123] Gathering logs for dmesg ...
	I0307 22:39:38.115183  203173 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 22:39:38.133346  203173 logs.go:123] Gathering logs for coredns [a105c02644c9c901090c4c9792a32b0396cee09395513751eeb1be7fbf4b8477] ...
	I0307 22:39:38.133379  203173 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a105c02644c9c901090c4c9792a32b0396cee09395513751eeb1be7fbf4b8477"
	I0307 22:39:38.172233  203173 logs.go:123] Gathering logs for kube-scheduler [2ca89421ca012456f503b4b92ac92a69bf6b0042461af4e2e3140e7adb8b27dc] ...
	I0307 22:39:38.172264  203173 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ca89421ca012456f503b4b92ac92a69bf6b0042461af4e2e3140e7adb8b27dc"
	I0307 22:39:38.215015  203173 logs.go:123] Gathering logs for kube-proxy [f1df99d1a705a4556e4be13c16d400b99149438c96bfe6da863b8b041983b036] ...
	I0307 22:39:38.215047  203173 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f1df99d1a705a4556e4be13c16d400b99149438c96bfe6da863b8b041983b036"
	I0307 22:39:38.253945  203173 logs.go:123] Gathering logs for kindnet [e9768721745ab392d0ad14d4055c5cf01cc780491a01953b4b45bf77d2d3b062] ...
	I0307 22:39:38.253974  203173 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e9768721745ab392d0ad14d4055c5cf01cc780491a01953b4b45bf77d2d3b062"
	I0307 22:39:38.298086  203173 logs.go:123] Gathering logs for storage-provisioner [426c41631cb95479682cbd7b19483427f710b2f7932b9e6d2bc0367710c2ca1d] ...
	I0307 22:39:38.298117  203173 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 426c41631cb95479682cbd7b19483427f710b2f7932b9e6d2bc0367710c2ca1d"
	I0307 22:39:38.343935  203173 logs.go:123] Gathering logs for coredns [413452dbe13bfb277303d92484791aa53ae9a198a089364135d2347bf8475623] ...
	I0307 22:39:38.343969  203173 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 413452dbe13bfb277303d92484791aa53ae9a198a089364135d2347bf8475623"
	I0307 22:39:38.382349  203173 logs.go:123] Gathering logs for kube-proxy [7998095a37dd2896f96215e8ab5d29ae495d3dc1a00b119506f8ace55156b07a] ...
	I0307 22:39:38.382378  203173 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7998095a37dd2896f96215e8ab5d29ae495d3dc1a00b119506f8ace55156b07a"
	I0307 22:39:38.427494  203173 logs.go:123] Gathering logs for kube-controller-manager [bd7a60d8d4f26067debaaf5b2c23fe94dcfed76118b17104051bc8c755b35ce1] ...
	I0307 22:39:38.427570  203173 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd7a60d8d4f26067debaaf5b2c23fe94dcfed76118b17104051bc8c755b35ce1"
	I0307 22:39:38.491465  203173 logs.go:123] Gathering logs for storage-provisioner [f98ff695720f5ebea06fa188196a8a6393c29dfd0211d4a0ec784c54cbb9f906] ...
	I0307 22:39:38.491536  203173 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f98ff695720f5ebea06fa188196a8a6393c29dfd0211d4a0ec784c54cbb9f906"
	I0307 22:39:38.532531  203173 logs.go:123] Gathering logs for kubernetes-dashboard [778e16c451c2e6a7fecd1d1b5d37778e92f50fd757af7d8812222cfa025fdc86] ...
	I0307 22:39:38.532564  203173 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 778e16c451c2e6a7fecd1d1b5d37778e92f50fd757af7d8812222cfa025fdc86"
	I0307 22:39:38.577829  203173 logs.go:123] Gathering logs for containerd ...
	I0307 22:39:38.577862  203173 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0307 22:39:38.638480  203173 logs.go:123] Gathering logs for container status ...
	I0307 22:39:38.638517  203173 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 22:39:38.693638  203173 logs.go:123] Gathering logs for kube-apiserver [8b2ed0287cc5bbf594cda29d926ae828c7d6d12ad490285c6e855c8824388d29] ...
	I0307 22:39:38.693667  203173 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8b2ed0287cc5bbf594cda29d926ae828c7d6d12ad490285c6e855c8824388d29"
	I0307 22:39:38.764704  203173 out.go:304] Setting ErrFile to fd 2...
	I0307 22:39:38.764734  203173 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0307 22:39:38.764785  203173 out.go:239] X Problems detected in kubelet:
	W0307 22:39:38.764795  203173 out.go:239]   Mar 07 22:39:03 old-k8s-version-497253 kubelet[660]: E0307 22:39:03.914427     660 pod_workers.go:191] Error syncing pod 1cd023e3-2966-4092-bcbd-4c5b6ca28aa3 ("metrics-server-9975d5f86-qmg9k_kube-system(1cd023e3-2966-4092-bcbd-4c5b6ca28aa3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 22:39:38.764803  203173 out.go:239]   Mar 07 22:39:15 old-k8s-version-497253 kubelet[660]: E0307 22:39:15.913973     660 pod_workers.go:191] Error syncing pod 14312bc8-5525-434e-b6f7-56306d708544 ("dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"
	W0307 22:39:38.764817  203173 out.go:239]   Mar 07 22:39:16 old-k8s-version-497253 kubelet[660]: E0307 22:39:16.918241     660 pod_workers.go:191] Error syncing pod 1cd023e3-2966-4092-bcbd-4c5b6ca28aa3 ("metrics-server-9975d5f86-qmg9k_kube-system(1cd023e3-2966-4092-bcbd-4c5b6ca28aa3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 22:39:38.764824  203173 out.go:239]   Mar 07 22:39:26 old-k8s-version-497253 kubelet[660]: E0307 22:39:26.914096     660 pod_workers.go:191] Error syncing pod 14312bc8-5525-434e-b6f7-56306d708544 ("dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"
	W0307 22:39:38.764836  203173 out.go:239]   Mar 07 22:39:29 old-k8s-version-497253 kubelet[660]: E0307 22:39:29.914280     660 pod_workers.go:191] Error syncing pod 1cd023e3-2966-4092-bcbd-4c5b6ca28aa3 ("metrics-server-9975d5f86-qmg9k_kube-system(1cd023e3-2966-4092-bcbd-4c5b6ca28aa3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0307 22:39:38.764843  203173 out.go:304] Setting ErrFile to fd 2...
	I0307 22:39:38.764849  203173 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 22:39:48.764963  203173 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0307 22:39:48.777388  203173 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0307 22:39:48.794117  203173 out.go:177] 
	W0307 22:39:48.797346  203173 out.go:239] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W0307 22:39:48.797390  203173 out.go:239] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	W0307 22:39:48.797410  203173 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	W0307 22:39:48.797416  203173 out.go:239] * 
	W0307 22:39:48.798329  203173 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0307 22:39:48.800764  203173 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p old-k8s-version-497253 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0": exit status 102
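The trace above shows the pattern behind this exit status 102: the start path repeatedly probes the apiserver's /healthz (api_server.go:253), gets "returned 200: ok", and still bails out with K8S_UNHEALTHY_CONTROL_PLANE because the control plane never reported the requested v1.20.0 within the 6m0s wait. Below is a minimal Go sketch of that healthz-polling style, not minikube's actual implementation; the endpoint URL, probe interval, client timeout, and TLS handling are illustrative assumptions taken only from the log lines above.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the deadline passes,
// mirroring the "Checking apiserver healthz at ..." loop in the trace.
func waitForHealthz(url string, timeout time.Duration) error {
	// The apiserver presents a self-signed certificate, so this plain health
	// probe skips verification (assumption: we are probing, not authenticating).
	client := &http.Client{
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
		Timeout: 5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // the "returned 200: ok" case in the trace
			}
		}
		time.Sleep(10 * time.Second) // the trace shows roughly 10s between probes
	}
	return fmt.Errorf("apiserver %s not healthy within %s", url, timeout)
}

func main() {
	// 192.168.76.2:8443 is the endpoint the trace probes for this profile.
	if err := waitForHealthz("https://192.168.76.2:8443/healthz", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}

Note that a plain healthz probe cannot catch this failure mode: healthz was green the whole time, and the test only fails on the separate version check, which is why the suggestion is a full delete/purge rather than a longer wait.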
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-497253
helpers_test.go:235: (dbg) docker inspect old-k8s-version-497253:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5da5b5e5895e5a152492eb1a928e7582d7738546ef3f1f494fc581b79848d274",
	        "Created": "2024-03-07T22:30:54.402543856Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 203405,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-03-07T22:33:38.227072028Z",
	            "FinishedAt": "2024-03-07T22:33:36.802787321Z"
	        },
	        "Image": "sha256:4a9b65157dd7fb2ddb7cb7afe975b3dc288e9877c60d13613a69dd41a70e2e4e",
	        "ResolvConfPath": "/var/lib/docker/containers/5da5b5e5895e5a152492eb1a928e7582d7738546ef3f1f494fc581b79848d274/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5da5b5e5895e5a152492eb1a928e7582d7738546ef3f1f494fc581b79848d274/hostname",
	        "HostsPath": "/var/lib/docker/containers/5da5b5e5895e5a152492eb1a928e7582d7738546ef3f1f494fc581b79848d274/hosts",
	        "LogPath": "/var/lib/docker/containers/5da5b5e5895e5a152492eb1a928e7582d7738546ef3f1f494fc581b79848d274/5da5b5e5895e5a152492eb1a928e7582d7738546ef3f1f494fc581b79848d274-json.log",
	        "Name": "/old-k8s-version-497253",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-497253:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-497253",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/3d80f0bc4a114d205a8392711721096edd71a9ea3631b4a85972fae12eda5537-init/diff:/var/lib/docker/overlay2/6822645c415ab3e3451f0dc6746bf9aea38c91b1070d7030c1ba88a1ef7f69e0/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3d80f0bc4a114d205a8392711721096edd71a9ea3631b4a85972fae12eda5537/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3d80f0bc4a114d205a8392711721096edd71a9ea3631b4a85972fae12eda5537/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3d80f0bc4a114d205a8392711721096edd71a9ea3631b4a85972fae12eda5537/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-497253",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-497253/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-497253",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-497253",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-497253",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8aab7a20fbb3828faae347e3a6fd0a027bf00f7d0b071089583e4156f4e1f8b8",
	            "SandboxKey": "/var/run/docker/netns/8aab7a20fbb3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33067"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33066"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33063"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33065"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33064"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-497253": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "5da5b5e5895e",
	                        "old-k8s-version-497253"
	                    ],
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "NetworkID": "c4f54b074df5ed656acbbc9c635d1f131cdf40a8a7fc4f5257a369f345bcfbbc",
	                    "EndpointID": "f7949d76ead884877a69d20b356eced6aa0ce76fbbbaba509c9bed393b82f878",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "old-k8s-version-497253",
	                        "5da5b5e5895e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
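The fields the post-mortem helpers key on in the dump above are State (the container reports status=running) and NetworkSettings.Ports (22/tcp, 2376/tcp, 5000/tcp, 8443/tcp, 32443/tcp all published on 127.0.0.1). A rough Go sketch of pulling just those fields out of `docker inspect` JSON follows; the partial struct is an assumption of this sketch, not minikube's own types, and encoding/json simply ignores every other field in the document.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// inspectEntry maps only the inspect fields this sketch cares about.
type inspectEntry struct {
	Name  string `json:"Name"`
	State struct {
		Status  string `json:"Status"`
		Running bool   `json:"Running"`
	} `json:"State"`
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIP   string `json:"HostIp"`
			HostPort string `json:"HostPort"`
		} `json:"Ports"`
	} `json:"NetworkSettings"`
}

func main() {
	// Container name taken from the report above.
	out, err := exec.Command("docker", "inspect", "old-k8s-version-497253").Output()
	if err != nil {
		fmt.Println("docker inspect failed:", err)
		return
	}
	var entries []inspectEntry // docker inspect always emits a JSON array
	if err := json.Unmarshal(out, &entries); err != nil {
		fmt.Println("unmarshal:", err)
		return
	}
	for _, e := range entries {
		fmt.Printf("%s: status=%s running=%v\n", e.Name, e.State.Status, e.State.Running)
		for port, bindings := range e.NetworkSettings.Ports {
			for _, b := range bindings {
				fmt.Printf("  %s -> %s:%s\n", port, b.HostIP, b.HostPort)
			}
		}
	}
}

Run against the container above, this would report status=running and the five 127.0.0.1 host-port bindings shown in the dump, which is exactly the evidence the harness needs before attempting the SSH-based log collection that follows.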
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-497253 -n old-k8s-version-497253
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-497253 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-497253 logs -n 25: (2.609039628s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p force-systemd-flag-026071                           | force-systemd-flag-026071 | jenkins | v1.32.0 | 07 Mar 24 22:29 UTC | 07 Mar 24 22:29 UTC |
	| start   | -p cert-expiration-193013                              | cert-expiration-193013    | jenkins | v1.32.0 | 07 Mar 24 22:29 UTC | 07 Mar 24 22:30 UTC |
	|         | --memory=2048                                          |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                                   |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-344179                               | force-systemd-env-344179  | jenkins | v1.32.0 | 07 Mar 24 22:30 UTC | 07 Mar 24 22:30 UTC |
	|         | ssh cat                                                |                           |         |         |                     |                     |
	|         | /etc/containerd/config.toml                            |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-344179                            | force-systemd-env-344179  | jenkins | v1.32.0 | 07 Mar 24 22:30 UTC | 07 Mar 24 22:30 UTC |
	| start   | -p cert-options-781550                                 | cert-options-781550       | jenkins | v1.32.0 | 07 Mar 24 22:30 UTC | 07 Mar 24 22:30 UTC |
	|         | --memory=2048                                          |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                              |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15                          |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost                            |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com                       |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                                  |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	| ssh     | cert-options-781550 ssh                                | cert-options-781550       | jenkins | v1.32.0 | 07 Mar 24 22:30 UTC | 07 Mar 24 22:30 UTC |
	|         | openssl x509 -text -noout -in                          |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                           |         |         |                     |                     |
	| ssh     | -p cert-options-781550 -- sudo                         | cert-options-781550       | jenkins | v1.32.0 | 07 Mar 24 22:30 UTC | 07 Mar 24 22:30 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                           |         |         |                     |                     |
	| delete  | -p cert-options-781550                                 | cert-options-781550       | jenkins | v1.32.0 | 07 Mar 24 22:30 UTC | 07 Mar 24 22:30 UTC |
	| start   | -p old-k8s-version-497253                              | old-k8s-version-497253    | jenkins | v1.32.0 | 07 Mar 24 22:30 UTC | 07 Mar 24 22:33 UTC |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                           |         |         |                     |                     |
	|         | --kvm-network=default                                  |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                           |         |         |                     |                     |
	|         | --keep-context=false                                   |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                           |         |         |                     |                     |
	| start   | -p cert-expiration-193013                              | cert-expiration-193013    | jenkins | v1.32.0 | 07 Mar 24 22:33 UTC | 07 Mar 24 22:33 UTC |
	|         | --memory=2048                                          |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-193013                              | cert-expiration-193013    | jenkins | v1.32.0 | 07 Mar 24 22:33 UTC | 07 Mar 24 22:33 UTC |
	| addons  | enable metrics-server -p old-k8s-version-497253        | old-k8s-version-497253    | jenkins | v1.32.0 | 07 Mar 24 22:33 UTC | 07 Mar 24 22:33 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                           |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                           |         |         |                     |                     |
	| start   | -p no-preload-767597                                   | no-preload-767597         | jenkins | v1.32.0 | 07 Mar 24 22:33 UTC | 07 Mar 24 22:34 UTC |
	|         | --memory=2200 --alsologtostderr                        |                           |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                           |         |         |                     |                     |
	| stop    | -p old-k8s-version-497253                              | old-k8s-version-497253    | jenkins | v1.32.0 | 07 Mar 24 22:33 UTC | 07 Mar 24 22:33 UTC |
	|         | --alsologtostderr -v=3                                 |                           |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-497253             | old-k8s-version-497253    | jenkins | v1.32.0 | 07 Mar 24 22:33 UTC | 07 Mar 24 22:33 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                           |         |         |                     |                     |
	| start   | -p old-k8s-version-497253                              | old-k8s-version-497253    | jenkins | v1.32.0 | 07 Mar 24 22:33 UTC |                     |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                           |         |         |                     |                     |
	|         | --kvm-network=default                                  |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                           |         |         |                     |                     |
	|         | --keep-context=false                                   |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                           |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-767597             | no-preload-767597         | jenkins | v1.32.0 | 07 Mar 24 22:34 UTC | 07 Mar 24 22:34 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                           |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                           |         |         |                     |                     |
	| stop    | -p no-preload-767597                                   | no-preload-767597         | jenkins | v1.32.0 | 07 Mar 24 22:34 UTC | 07 Mar 24 22:35 UTC |
	|         | --alsologtostderr -v=3                                 |                           |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-767597                  | no-preload-767597         | jenkins | v1.32.0 | 07 Mar 24 22:35 UTC | 07 Mar 24 22:35 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                           |         |         |                     |                     |
	| start   | -p no-preload-767597                                   | no-preload-767597         | jenkins | v1.32.0 | 07 Mar 24 22:35 UTC | 07 Mar 24 22:39 UTC |
	|         | --memory=2200 --alsologtostderr                        |                           |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                           |         |         |                     |                     |
	| image   | no-preload-767597 image list                           | no-preload-767597         | jenkins | v1.32.0 | 07 Mar 24 22:39 UTC | 07 Mar 24 22:39 UTC |
	|         | --format=json                                          |                           |         |         |                     |                     |
	| pause   | -p no-preload-767597                                   | no-preload-767597         | jenkins | v1.32.0 | 07 Mar 24 22:39 UTC | 07 Mar 24 22:39 UTC |
	|         | --alsologtostderr -v=1                                 |                           |         |         |                     |                     |
	| unpause | -p no-preload-767597                                   | no-preload-767597         | jenkins | v1.32.0 | 07 Mar 24 22:39 UTC | 07 Mar 24 22:39 UTC |
	|         | --alsologtostderr -v=1                                 |                           |         |         |                     |                     |
	| delete  | -p no-preload-767597                                   | no-preload-767597         | jenkins | v1.32.0 | 07 Mar 24 22:39 UTC | 07 Mar 24 22:39 UTC |
	| delete  | -p no-preload-767597                                   | no-preload-767597         | jenkins | v1.32.0 | 07 Mar 24 22:39 UTC |                     |
	|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/07 22:35:05
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0307 22:35:05.539670  208512 out.go:291] Setting OutFile to fd 1 ...
	I0307 22:35:05.539864  208512 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 22:35:05.539892  208512 out.go:304] Setting ErrFile to fd 2...
	I0307 22:35:05.539915  208512 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 22:35:05.540167  208512 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18320-2408/.minikube/bin
	I0307 22:35:05.540655  208512 out.go:298] Setting JSON to false
	I0307 22:35:05.541749  208512 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":4649,"bootTime":1709846257,"procs":219,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0307 22:35:05.541859  208512 start.go:139] virtualization:  
	I0307 22:35:05.544310  208512 out.go:177] * [no-preload-767597] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0307 22:35:05.546633  208512 out.go:177]   - MINIKUBE_LOCATION=18320
	I0307 22:35:05.546729  208512 notify.go:220] Checking for updates...
	I0307 22:35:05.551337  208512 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 22:35:05.553610  208512 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18320-2408/kubeconfig
	I0307 22:35:05.555557  208512 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18320-2408/.minikube
	I0307 22:35:05.557498  208512 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0307 22:35:05.560813  208512 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 22:35:05.563424  208512 config.go:182] Loaded profile config "no-preload-767597": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.0-rc.2
	I0307 22:35:05.564012  208512 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 22:35:05.586091  208512 docker.go:122] docker version: linux-25.0.4:Docker Engine - Community
	I0307 22:35:05.586208  208512 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0307 22:35:05.656062  208512 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:38 OomKillDisable:true NGoroutines:56 SystemTime:2024-03-07 22:35:05.646333607 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0307 22:35:05.656173  208512 docker.go:295] overlay module found
	I0307 22:35:05.659498  208512 out.go:177] * Using the docker driver based on existing profile
	I0307 22:35:05.661127  208512 start.go:297] selected driver: docker
	I0307 22:35:05.661144  208512 start.go:901] validating driver "docker" against &{Name:no-preload-767597 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-767597 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 22:35:05.661271  208512 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 22:35:05.661909  208512 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0307 22:35:05.720114  208512 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:38 OomKillDisable:true NGoroutines:56 SystemTime:2024-03-07 22:35:05.710233866 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0307 22:35:05.720511  208512 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0307 22:35:05.720541  208512 cni.go:84] Creating CNI manager for ""
	I0307 22:35:05.720556  208512 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0307 22:35:05.720610  208512 start.go:340] cluster config:
	{Name:no-preload-767597 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-767597 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 22:35:05.723958  208512 out.go:177] * Starting "no-preload-767597" primary control-plane node in "no-preload-767597" cluster
	I0307 22:35:05.725666  208512 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0307 22:35:05.727457  208512 out.go:177] * Pulling base image v0.0.42-1708944392-18244 ...
	I0307 22:35:05.729212  208512 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime containerd
	I0307 22:35:05.729288  208512 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local docker daemon
	I0307 22:35:05.729362  208512 profile.go:142] Saving config to /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/no-preload-767597/config.json ...
	I0307 22:35:05.729695  208512 cache.go:107] acquiring lock: {Name:mka6c3327e8416acbc1ddfc53ded2e1c0e027796 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 22:35:05.729785  208512 cache.go:115] /home/jenkins/minikube-integration/18320-2408/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0307 22:35:05.729803  208512 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/18320-2408/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 108.134µs
	I0307 22:35:05.729812  208512 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/18320-2408/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0307 22:35:05.729824  208512 cache.go:107] acquiring lock: {Name:mkc926160654c366d4e22891ccf53de9d40179db Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 22:35:05.729866  208512 cache.go:115] /home/jenkins/minikube-integration/18320-2408/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 exists
	I0307 22:35:05.729876  208512 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" -> "/home/jenkins/minikube-integration/18320-2408/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2" took 53.793µs
	I0307 22:35:05.729883  208512 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.29.0-rc.2 -> /home/jenkins/minikube-integration/18320-2408/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 succeeded
	I0307 22:35:05.729894  208512 cache.go:107] acquiring lock: {Name:mk74108e32ca4ff12332d8ace0a83c3d5ab29616 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 22:35:05.729925  208512 cache.go:115] /home/jenkins/minikube-integration/18320-2408/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 exists
	I0307 22:35:05.729930  208512 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" -> "/home/jenkins/minikube-integration/18320-2408/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2" took 39.368µs
	I0307 22:35:05.729938  208512 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 -> /home/jenkins/minikube-integration/18320-2408/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 succeeded
	I0307 22:35:05.729948  208512 cache.go:107] acquiring lock: {Name:mke8b59caa1366e1b9f61a022166908cffd51832 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 22:35:05.729974  208512 cache.go:115] /home/jenkins/minikube-integration/18320-2408/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 exists
	I0307 22:35:05.729979  208512 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" -> "/home/jenkins/minikube-integration/18320-2408/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2" took 32.041µs
	I0307 22:35:05.729985  208512 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.29.0-rc.2 -> /home/jenkins/minikube-integration/18320-2408/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 succeeded
	I0307 22:35:05.729970  208512 cache.go:107] acquiring lock: {Name:mkbbedbc94139b78cda59fb462df0ab035a59e23 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 22:35:05.729997  208512 cache.go:107] acquiring lock: {Name:mk9c211a81a668d6ffad9455ca24af36bab0332d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 22:35:05.730037  208512 cache.go:115] /home/jenkins/minikube-integration/18320-2408/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0307 22:35:05.730045  208512 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/home/jenkins/minikube-integration/18320-2408/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 81.534µs
	I0307 22:35:05.730049  208512 cache.go:115] /home/jenkins/minikube-integration/18320-2408/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.10-0 exists
	I0307 22:35:05.730052  208512 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /home/jenkins/minikube-integration/18320-2408/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0307 22:35:05.730056  208512 cache.go:96] cache image "registry.k8s.io/etcd:3.5.10-0" -> "/home/jenkins/minikube-integration/18320-2408/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.10-0" took 60.086µs
	I0307 22:35:05.730063  208512 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.10-0 -> /home/jenkins/minikube-integration/18320-2408/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.10-0 succeeded
	I0307 22:35:05.730064  208512 cache.go:107] acquiring lock: {Name:mk6973bd0b593c335863a55401fb275d55bac78d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 22:35:05.730073  208512 cache.go:107] acquiring lock: {Name:mk94425fd57864a38c1451110d6dd403340ba31f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 22:35:05.730092  208512 cache.go:115] /home/jenkins/minikube-integration/18320-2408/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 exists
	I0307 22:35:05.730098  208512 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.29.0-rc.2" -> "/home/jenkins/minikube-integration/18320-2408/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.29.0-rc.2" took 35.955µs
	I0307 22:35:05.730104  208512 cache.go:115] /home/jenkins/minikube-integration/18320-2408/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0307 22:35:05.730110  208512 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/home/jenkins/minikube-integration/18320-2408/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 38.909µs
	I0307 22:35:05.730118  208512 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /home/jenkins/minikube-integration/18320-2408/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0307 22:35:05.730104  208512 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.29.0-rc.2 -> /home/jenkins/minikube-integration/18320-2408/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 succeeded
	I0307 22:35:05.730132  208512 cache.go:87] Successfully saved all images to host disk.
	I0307 22:35:05.750647  208512 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local docker daemon, skipping pull
	I0307 22:35:05.750688  208512 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 exists in daemon, skipping load
	I0307 22:35:05.750703  208512 cache.go:194] Successfully downloaded all kic artifacts
	I0307 22:35:05.750729  208512 start.go:360] acquireMachinesLock for no-preload-767597: {Name:mk1b21576e73b82e91984738a4b5e9ee433ab7ee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0307 22:35:05.750790  208512 start.go:364] duration metric: took 44.726µs to acquireMachinesLock for "no-preload-767597"
	I0307 22:35:05.750810  208512 start.go:96] Skipping create...Using existing machine configuration
	I0307 22:35:05.750816  208512 fix.go:54] fixHost starting: 
	I0307 22:35:05.751078  208512 cli_runner.go:164] Run: docker container inspect no-preload-767597 --format={{.State.Status}}
	I0307 22:35:05.769629  208512 fix.go:112] recreateIfNeeded on no-preload-767597: state=Stopped err=<nil>
	W0307 22:35:05.769659  208512 fix.go:138] unexpected machine state, will restart: <nil>
	I0307 22:35:05.772086  208512 out.go:177] * Restarting existing docker container for "no-preload-767597" ...
	I0307 22:35:03.749214  203173 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-497253" in "kube-system" namespace has status "Ready":"False"
	I0307 22:35:05.750941  203173 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-497253" in "kube-system" namespace has status "Ready":"False"
	I0307 22:35:05.773718  208512 cli_runner.go:164] Run: docker start no-preload-767597
	I0307 22:35:06.089985  208512 cli_runner.go:164] Run: docker container inspect no-preload-767597 --format={{.State.Status}}
	I0307 22:35:06.114482  208512 kic.go:430] container "no-preload-767597" state is running.
	I0307 22:35:06.115357  208512 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-767597
	I0307 22:35:06.143354  208512 profile.go:142] Saving config to /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/no-preload-767597/config.json ...
	I0307 22:35:06.143620  208512 machine.go:94] provisionDockerMachine start ...
	I0307 22:35:06.143696  208512 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-767597
	I0307 22:35:06.168346  208512 main.go:141] libmachine: Using SSH client type: native
	I0307 22:35:06.168646  208512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1c40] 0x3e44a0 <nil>  [] 0s} 127.0.0.1 33072 <nil> <nil>}
	I0307 22:35:06.168661  208512 main.go:141] libmachine: About to run SSH command:
	hostname
	I0307 22:35:06.169319  208512 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47432->127.0.0.1:33072: read: connection reset by peer
	I0307 22:35:09.308053  208512 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-767597
	
	I0307 22:35:09.308079  208512 ubuntu.go:169] provisioning hostname "no-preload-767597"
	I0307 22:35:09.308143  208512 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-767597
	I0307 22:35:09.323243  208512 main.go:141] libmachine: Using SSH client type: native
	I0307 22:35:09.323507  208512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1c40] 0x3e44a0 <nil>  [] 0s} 127.0.0.1 33072 <nil> <nil>}
	I0307 22:35:09.323523  208512 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-767597 && echo "no-preload-767597" | sudo tee /etc/hostname
	I0307 22:35:09.468491  208512 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-767597
	
	I0307 22:35:09.468584  208512 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-767597
	I0307 22:35:09.486013  208512 main.go:141] libmachine: Using SSH client type: native
	I0307 22:35:09.486262  208512 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1c40] 0x3e44a0 <nil>  [] 0s} 127.0.0.1 33072 <nil> <nil>}
	I0307 22:35:09.486285  208512 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-767597' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-767597/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-767597' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0307 22:35:09.616385  208512 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0307 22:35:09.616411  208512 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18320-2408/.minikube CaCertPath:/home/jenkins/minikube-integration/18320-2408/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18320-2408/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18320-2408/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18320-2408/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18320-2408/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18320-2408/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18320-2408/.minikube}
	I0307 22:35:09.616451  208512 ubuntu.go:177] setting up certificates
	I0307 22:35:09.616462  208512 provision.go:84] configureAuth start
	I0307 22:35:09.616526  208512 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-767597
	I0307 22:35:09.639008  208512 provision.go:143] copyHostCerts
	I0307 22:35:09.639078  208512 exec_runner.go:144] found /home/jenkins/minikube-integration/18320-2408/.minikube/ca.pem, removing ...
	I0307 22:35:09.639093  208512 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18320-2408/.minikube/ca.pem
	I0307 22:35:09.639167  208512 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18320-2408/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18320-2408/.minikube/ca.pem (1078 bytes)
	I0307 22:35:09.639263  208512 exec_runner.go:144] found /home/jenkins/minikube-integration/18320-2408/.minikube/cert.pem, removing ...
	I0307 22:35:09.639268  208512 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18320-2408/.minikube/cert.pem
	I0307 22:35:09.639293  208512 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18320-2408/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18320-2408/.minikube/cert.pem (1123 bytes)
	I0307 22:35:09.639349  208512 exec_runner.go:144] found /home/jenkins/minikube-integration/18320-2408/.minikube/key.pem, removing ...
	I0307 22:35:09.639354  208512 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18320-2408/.minikube/key.pem
	I0307 22:35:09.639383  208512 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18320-2408/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18320-2408/.minikube/key.pem (1675 bytes)
	I0307 22:35:09.639432  208512 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18320-2408/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18320-2408/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18320-2408/.minikube/certs/ca-key.pem org=jenkins.no-preload-767597 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-767597]
	I0307 22:35:10.676195  208512 provision.go:177] copyRemoteCerts
	I0307 22:35:10.676297  208512 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0307 22:35:10.676344  208512 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-767597
	I0307 22:35:10.694615  208512 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33072 SSHKeyPath:/home/jenkins/minikube-integration/18320-2408/.minikube/machines/no-preload-767597/id_rsa Username:docker}
	I0307 22:35:10.789174  208512 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18320-2408/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0307 22:35:10.813943  208512 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18320-2408/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0307 22:35:10.839582  208512 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18320-2408/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0307 22:35:10.866601  208512 provision.go:87] duration metric: took 1.250117325s to configureAuth
	I0307 22:35:10.866628  208512 ubuntu.go:193] setting minikube options for container-runtime
	I0307 22:35:10.866817  208512 config.go:182] Loaded profile config "no-preload-767597": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.0-rc.2
	I0307 22:35:10.866823  208512 machine.go:97] duration metric: took 4.72318698s to provisionDockerMachine
	I0307 22:35:10.866831  208512 start.go:293] postStartSetup for "no-preload-767597" (driver="docker")
	I0307 22:35:10.866842  208512 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0307 22:35:10.866893  208512 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0307 22:35:10.866930  208512 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-767597
	I0307 22:35:10.884814  208512 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33072 SSHKeyPath:/home/jenkins/minikube-integration/18320-2408/.minikube/machines/no-preload-767597/id_rsa Username:docker}
	I0307 22:35:10.977196  208512 ssh_runner.go:195] Run: cat /etc/os-release
	I0307 22:35:10.980159  208512 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0307 22:35:10.980194  208512 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0307 22:35:10.980226  208512 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0307 22:35:10.980235  208512 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0307 22:35:10.980245  208512 filesync.go:126] Scanning /home/jenkins/minikube-integration/18320-2408/.minikube/addons for local assets ...
	I0307 22:35:10.980339  208512 filesync.go:126] Scanning /home/jenkins/minikube-integration/18320-2408/.minikube/files for local assets ...
	I0307 22:35:10.980443  208512 filesync.go:149] local asset: /home/jenkins/minikube-integration/18320-2408/.minikube/files/etc/ssl/certs/77642.pem -> 77642.pem in /etc/ssl/certs
	I0307 22:35:10.980587  208512 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0307 22:35:10.989109  208512 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18320-2408/.minikube/files/etc/ssl/certs/77642.pem --> /etc/ssl/certs/77642.pem (1708 bytes)
	I0307 22:35:11.019358  208512 start.go:296] duration metric: took 152.512247ms for postStartSetup
	I0307 22:35:11.019444  208512 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0307 22:35:11.019495  208512 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-767597
	I0307 22:35:11.034983  208512 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33072 SSHKeyPath:/home/jenkins/minikube-integration/18320-2408/.minikube/machines/no-preload-767597/id_rsa Username:docker}
	I0307 22:35:11.125693  208512 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0307 22:35:11.130283  208512 fix.go:56] duration metric: took 5.379459505s for fixHost
	I0307 22:35:11.130308  208512 start.go:83] releasing machines lock for "no-preload-767597", held for 5.379509145s
	I0307 22:35:11.130380  208512 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-767597
	I0307 22:35:11.145816  208512 ssh_runner.go:195] Run: cat /version.json
	I0307 22:35:11.145872  208512 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-767597
	I0307 22:35:11.146114  208512 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0307 22:35:11.146155  208512 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-767597
	I0307 22:35:11.166469  208512 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33072 SSHKeyPath:/home/jenkins/minikube-integration/18320-2408/.minikube/machines/no-preload-767597/id_rsa Username:docker}
	I0307 22:35:11.167791  208512 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33072 SSHKeyPath:/home/jenkins/minikube-integration/18320-2408/.minikube/machines/no-preload-767597/id_rsa Username:docker}
	I0307 22:35:11.260106  208512 ssh_runner.go:195] Run: systemctl --version
	I0307 22:35:11.384890  208512 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0307 22:35:11.389495  208512 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0307 22:35:11.409586  208512 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
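The loopback patch just logged rewrites each loopback CNI conf so it carries a "name" field and cniVersion 1.0.0, which CNI 1.x requires. minikube shells out to sed as shown above; the same edit expressed with encoding/json, purely as an illustration (the path is hypothetical):

    package main

    import (
    	"encoding/json"
    	"os"
    )

    func patchLoopback(path string) error {
    	raw, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	var conf map[string]interface{}
    	if err := json.Unmarshal(raw, &conf); err != nil {
    		return err
    	}
    	if _, ok := conf["name"]; !ok {
    		conf["name"] = "loopback" // CNI 1.0.0 makes "name" mandatory
    	}
    	conf["cniVersion"] = "1.0.0"
    	out, err := json.MarshalIndent(conf, "", "  ")
    	if err != nil {
    		return err
    	}
    	return os.WriteFile(path, out, 0644)
    }

    func main() {
    	if err := patchLoopback("/etc/cni/net.d/200-loopback.conf"); err != nil {
    		panic(err)
    	}
    }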
	I0307 22:35:11.409693  208512 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0307 22:35:11.418690  208512 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0307 22:35:11.418712  208512 start.go:494] detecting cgroup driver to use...
	I0307 22:35:11.418743  208512 detect.go:196] detected "cgroupfs" cgroup driver on host os
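A common heuristic behind a "detected cgroupfs cgroup driver on host os" decision: if systemd manages the host, prefer the systemd driver, otherwise fall back to cgroupfs. A sketch of that idea only (not necessarily what minikube's detect.go actually checks):

    package main

    import (
    	"fmt"
    	"os"
    )

    func cgroupDriver() string {
    	// systemd creates /run/systemd/system on hosts where it runs as init.
    	if fi, err := os.Stat("/run/systemd/system"); err == nil && fi.IsDir() {
    		return "systemd"
    	}
    	return "cgroupfs"
    }

    func main() {
    	fmt.Printf("detected %q cgroup driver\n", cgroupDriver())
    }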
	I0307 22:35:11.418791  208512 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0307 22:35:11.433857  208512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0307 22:35:11.446795  208512 docker.go:217] disabling cri-docker service (if available) ...
	I0307 22:35:11.446872  208512 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0307 22:35:11.461120  208512 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0307 22:35:11.472245  208512 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0307 22:35:11.564641  208512 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0307 22:35:11.657629  208512 docker.go:233] disabling docker service ...
	I0307 22:35:11.657720  208512 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0307 22:35:11.670160  208512 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0307 22:35:11.681746  208512 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0307 22:35:11.774575  208512 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0307 22:35:11.862709  208512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0307 22:35:11.875352  208512 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0307 22:35:11.894457  208512 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0307 22:35:11.905427  208512 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0307 22:35:11.915375  208512 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0307 22:35:11.915500  208512 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0307 22:35:11.925501  208512 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0307 22:35:11.935706  208512 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0307 22:35:11.945534  208512 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0307 22:35:11.955134  208512 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0307 22:35:11.965268  208512 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0307 22:35:11.975859  208512 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0307 22:35:11.984932  208512 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0307 22:35:11.993636  208512 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 22:35:12.094898  208512 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0307 22:35:12.253778  208512 start.go:541] Will wait 60s for socket path /run/containerd/containerd.sock
	I0307 22:35:12.253868  208512 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
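The "Will wait 60s for socket path" step is essentially a stat-poll against the socket file with a deadline. A minimal sketch (the 500ms poll interval is an assumption):

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    func waitForSocket(path string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if _, err := os.Stat(path); err == nil {
    			return nil // socket path exists; containerd is up
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
    }

    func main() {
    	if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
    		panic(err)
    	}
    }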
	I0307 22:35:12.259413  208512 start.go:562] Will wait 60s for crictl version
	I0307 22:35:12.259506  208512 ssh_runner.go:195] Run: which crictl
	I0307 22:35:12.263301  208512 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0307 22:35:12.308581  208512 start.go:578] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.28
	RuntimeApiVersion:  v1
	I0307 22:35:12.308669  208512 ssh_runner.go:195] Run: containerd --version
	I0307 22:35:12.333234  208512 ssh_runner.go:195] Run: containerd --version
	I0307 22:35:12.367906  208512 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on containerd 1.6.28 ...
	I0307 22:35:08.249026  203173 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-497253" in "kube-system" namespace has status "Ready":"False"
	I0307 22:35:10.251021  203173 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-497253" in "kube-system" namespace has status "Ready":"False"
	I0307 22:35:12.369952  208512 cli_runner.go:164] Run: docker network inspect no-preload-767597 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0307 22:35:12.385903  208512 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0307 22:35:12.389834  208512 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
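The /etc/hosts one-liner above is an idempotent replace-or-append: strip any existing line for host.minikube.internal, then append the fresh mapping. The same pattern in Go, for illustration only (the helper name is made up):

    package main

    import (
    	"os"
    	"strings"
    )

    func setHostsEntry(path, ip, name string) error {
    	raw, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(raw), "\n"), "\n") {
    		// Mirrors grep -v $'\t<name>$': drop any stale entry for this name.
    		if strings.HasSuffix(line, "\t"+name) {
    			continue
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, ip+"\t"+name)
    	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
    	if err := setHostsEntry("/etc/hosts", "192.168.85.1", "host.minikube.internal"); err != nil {
    		panic(err)
    	}
    }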
	I0307 22:35:12.402296  208512 kubeadm.go:877] updating cluster {Name:no-preload-767597 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-767597 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0307 22:35:12.402429  208512 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime containerd
	I0307 22:35:12.402474  208512 ssh_runner.go:195] Run: sudo crictl images --output json
	I0307 22:35:12.438131  208512 containerd.go:612] all images are preloaded for containerd runtime.
	I0307 22:35:12.438155  208512 cache_images.go:84] Images are preloaded, skipping loading
	I0307 22:35:12.438170  208512 kubeadm.go:928] updating node { 192.168.85.2 8443 v1.29.0-rc.2 containerd true true} ...
	I0307 22:35:12.438275  208512 kubeadm.go:940] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-767597 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-767597 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0307 22:35:12.438339  208512 ssh_runner.go:195] Run: sudo crictl info
	I0307 22:35:12.476805  208512 cni.go:84] Creating CNI manager for ""
	I0307 22:35:12.476826  208512 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0307 22:35:12.476836  208512 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0307 22:35:12.476858  208512 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-767597 NodeName:no-preload-767597 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0307 22:35:12.476995  208512 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-767597"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0307 22:35:12.477070  208512 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0307 22:35:12.486347  208512 binaries.go:44] Found k8s binaries, skipping transfer
	I0307 22:35:12.486418  208512 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0307 22:35:12.494910  208512 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I0307 22:35:12.512714  208512 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0307 22:35:12.532829  208512 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2176 bytes)
	I0307 22:35:12.552181  208512 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0307 22:35:12.555719  208512 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0307 22:35:12.566841  208512 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 22:35:12.666052  208512 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0307 22:35:12.684757  208512 certs.go:68] Setting up /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/no-preload-767597 for IP: 192.168.85.2
	I0307 22:35:12.684788  208512 certs.go:194] generating shared ca certs ...
	I0307 22:35:12.684805  208512 certs.go:226] acquiring lock for ca certs: {Name:mk7f303c61c8508a802bee4114a394243ccd109f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 22:35:12.684937  208512 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18320-2408/.minikube/ca.key
	I0307 22:35:12.684993  208512 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18320-2408/.minikube/proxy-client-ca.key
	I0307 22:35:12.685005  208512 certs.go:256] generating profile certs ...
	I0307 22:35:12.685087  208512 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/no-preload-767597/client.key
	I0307 22:35:12.685175  208512 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/no-preload-767597/apiserver.key.8eb0adea
	I0307 22:35:12.685228  208512 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/no-preload-767597/proxy-client.key
	I0307 22:35:12.685369  208512 certs.go:484] found cert: /home/jenkins/minikube-integration/18320-2408/.minikube/certs/7764.pem (1338 bytes)
	W0307 22:35:12.685404  208512 certs.go:480] ignoring /home/jenkins/minikube-integration/18320-2408/.minikube/certs/7764_empty.pem, impossibly tiny 0 bytes
	I0307 22:35:12.685416  208512 certs.go:484] found cert: /home/jenkins/minikube-integration/18320-2408/.minikube/certs/ca-key.pem (1679 bytes)
	I0307 22:35:12.685444  208512 certs.go:484] found cert: /home/jenkins/minikube-integration/18320-2408/.minikube/certs/ca.pem (1078 bytes)
	I0307 22:35:12.685478  208512 certs.go:484] found cert: /home/jenkins/minikube-integration/18320-2408/.minikube/certs/cert.pem (1123 bytes)
	I0307 22:35:12.685503  208512 certs.go:484] found cert: /home/jenkins/minikube-integration/18320-2408/.minikube/certs/key.pem (1675 bytes)
	I0307 22:35:12.685549  208512 certs.go:484] found cert: /home/jenkins/minikube-integration/18320-2408/.minikube/files/etc/ssl/certs/77642.pem (1708 bytes)
	I0307 22:35:12.686235  208512 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18320-2408/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0307 22:35:12.713409  208512 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18320-2408/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0307 22:35:12.738625  208512 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18320-2408/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0307 22:35:12.765287  208512 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18320-2408/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0307 22:35:12.803678  208512 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/no-preload-767597/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0307 22:35:12.830316  208512 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/no-preload-767597/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0307 22:35:12.859080  208512 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/no-preload-767597/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0307 22:35:12.887026  208512 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/no-preload-767597/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0307 22:35:12.913198  208512 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18320-2408/.minikube/files/etc/ssl/certs/77642.pem --> /usr/share/ca-certificates/77642.pem (1708 bytes)
	I0307 22:35:12.957260  208512 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18320-2408/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0307 22:35:12.999129  208512 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18320-2408/.minikube/certs/7764.pem --> /usr/share/ca-certificates/7764.pem (1338 bytes)
	I0307 22:35:13.029895  208512 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0307 22:35:13.050062  208512 ssh_runner.go:195] Run: openssl version
	I0307 22:35:13.058403  208512 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/77642.pem && ln -fs /usr/share/ca-certificates/77642.pem /etc/ssl/certs/77642.pem"
	I0307 22:35:13.069362  208512 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/77642.pem
	I0307 22:35:13.072879  208512 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar  7 21:53 /usr/share/ca-certificates/77642.pem
	I0307 22:35:13.072981  208512 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/77642.pem
	I0307 22:35:13.079923  208512 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/77642.pem /etc/ssl/certs/3ec20f2e.0"
	I0307 22:35:13.088691  208512 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0307 22:35:13.097976  208512 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0307 22:35:13.101637  208512 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar  7 21:47 /usr/share/ca-certificates/minikubeCA.pem
	I0307 22:35:13.101699  208512 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0307 22:35:13.108560  208512 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0307 22:35:13.117467  208512 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7764.pem && ln -fs /usr/share/ca-certificates/7764.pem /etc/ssl/certs/7764.pem"
	I0307 22:35:13.127102  208512 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7764.pem
	I0307 22:35:13.130554  208512 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar  7 21:53 /usr/share/ca-certificates/7764.pem
	I0307 22:35:13.130621  208512 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7764.pem
	I0307 22:35:13.137968  208512 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7764.pem /etc/ssl/certs/51391683.0"
	I0307 22:35:13.147301  208512 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0307 22:35:13.151104  208512 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0307 22:35:13.158301  208512 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0307 22:35:13.165343  208512 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0307 22:35:13.173370  208512 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0307 22:35:13.180671  208512 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0307 22:35:13.187377  208512 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
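Each of the "openssl x509 ... -checkend 86400" runs above asks a single question: does this certificate expire within the next 24 hours? The equivalent check in Go (the cert path is taken from the log; the helper name is illustrative):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    func expiresWithin(path string, window time.Duration) (bool, error) {
    	raw, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(raw)
    	if block == nil {
    		return false, fmt.Errorf("no PEM data in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("expires within 24h:", soon) // a "yes" here would force regeneration
    }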
	I0307 22:35:13.194422  208512 kubeadm.go:391] StartCluster: {Name:no-preload-767597 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-767597 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 22:35:13.194528  208512 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0307 22:35:13.194610  208512 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0307 22:35:13.233246  208512 cri.go:89] found id: "cd25d5f2cd96fffd3dc549470243d20ea7e47ef4dcd423227a4a52724f521183"
	I0307 22:35:13.233268  208512 cri.go:89] found id: "988c7824310b34904cc0fcc89dceab6a97539faa8b8f508cbcbbdf112c8521fb"
	I0307 22:35:13.233274  208512 cri.go:89] found id: "8f5bfb546acf680bf07066e7e21f63d7aff174c71c050e14a9bd642cdffe16f7"
	I0307 22:35:13.233277  208512 cri.go:89] found id: "775c4272397dc700a1daf97c4e663e7fc3f71729356cee17bd4235ddf6daa2a3"
	I0307 22:35:13.233282  208512 cri.go:89] found id: "61d3947cf55d680139b701ec61051f9ada8b04b18e981f05adeac9a4a7a67d89"
	I0307 22:35:13.233287  208512 cri.go:89] found id: "a52fdbe185cf6604953f1db9bf33ddda100e7929d29ddd019abb2c4f628ccf8a"
	I0307 22:35:13.233290  208512 cri.go:89] found id: "da7081019e6f16e832848ada7b2e006667f931fc538edab6b1ebd551fa208cbe"
	I0307 22:35:13.233293  208512 cri.go:89] found id: "3ad41b9769064eab5e8003ea9de4b5b9f58af6a4d81413a5083b9b5abc308170"
	I0307 22:35:13.233324  208512 cri.go:89] found id: ""
	I0307 22:35:13.233396  208512 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0307 22:35:13.249198  208512 cri.go:116] JSON = null
	W0307 22:35:13.249243  208512 kubeadm.go:398] unpause failed: list paused: list returned 0 containers, but ps returned 8
	I0307 22:35:13.249302  208512 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0307 22:35:13.259242  208512 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0307 22:35:13.259312  208512 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0307 22:35:13.259332  208512 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0307 22:35:13.259398  208512 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0307 22:35:13.268262  208512 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0307 22:35:13.268928  208512 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-767597" does not appear in /home/jenkins/minikube-integration/18320-2408/kubeconfig
	I0307 22:35:13.269189  208512 kubeconfig.go:62] /home/jenkins/minikube-integration/18320-2408/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-767597" cluster setting kubeconfig missing "no-preload-767597" context setting]
	I0307 22:35:13.269661  208512 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18320-2408/kubeconfig: {Name:mkc7f9d8cfd4e14e150b8fc8a3019ac099191c36 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 22:35:13.271015  208512 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0307 22:35:13.282333  208512 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.85.2
	I0307 22:35:13.282367  208512 kubeadm.go:591] duration metric: took 23.023091ms to restartPrimaryControlPlane
	I0307 22:35:13.282387  208512 kubeadm.go:393] duration metric: took 87.963131ms to StartCluster
	I0307 22:35:13.282403  208512 settings.go:142] acquiring lock: {Name:mk6b824c86d3c8cffe443e44d2dcdf6ba75674f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 22:35:13.282459  208512 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18320-2408/kubeconfig
	I0307 22:35:13.283397  208512 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18320-2408/kubeconfig: {Name:mkc7f9d8cfd4e14e150b8fc8a3019ac099191c36 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 22:35:13.283604  208512 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0307 22:35:13.286382  208512 out.go:177] * Verifying Kubernetes components...
	I0307 22:35:13.283949  208512 config.go:182] Loaded profile config "no-preload-767597": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.0-rc.2
	I0307 22:35:13.283961  208512 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0307 22:35:13.288915  208512 addons.go:69] Setting storage-provisioner=true in profile "no-preload-767597"
	I0307 22:35:13.288938  208512 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0307 22:35:13.288949  208512 addons.go:234] Setting addon storage-provisioner=true in "no-preload-767597"
	W0307 22:35:13.288957  208512 addons.go:243] addon storage-provisioner should already be in state true
	I0307 22:35:13.288992  208512 host.go:66] Checking if "no-preload-767597" exists ...
	I0307 22:35:13.289031  208512 addons.go:69] Setting dashboard=true in profile "no-preload-767597"
	I0307 22:35:13.289048  208512 addons.go:234] Setting addon dashboard=true in "no-preload-767597"
	W0307 22:35:13.289055  208512 addons.go:243] addon dashboard should already be in state true
	I0307 22:35:13.289089  208512 host.go:66] Checking if "no-preload-767597" exists ...
	I0307 22:35:13.289490  208512 cli_runner.go:164] Run: docker container inspect no-preload-767597 --format={{.State.Status}}
	I0307 22:35:13.289499  208512 cli_runner.go:164] Run: docker container inspect no-preload-767597 --format={{.State.Status}}
	I0307 22:35:13.290004  208512 addons.go:69] Setting metrics-server=true in profile "no-preload-767597"
	I0307 22:35:13.290037  208512 addons.go:234] Setting addon metrics-server=true in "no-preload-767597"
	W0307 22:35:13.290052  208512 addons.go:243] addon metrics-server should already be in state true
	I0307 22:35:13.290082  208512 host.go:66] Checking if "no-preload-767597" exists ...
	I0307 22:35:13.290520  208512 cli_runner.go:164] Run: docker container inspect no-preload-767597 --format={{.State.Status}}
	I0307 22:35:13.290916  208512 addons.go:69] Setting default-storageclass=true in profile "no-preload-767597"
	I0307 22:35:13.290945  208512 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-767597"
	I0307 22:35:13.291235  208512 cli_runner.go:164] Run: docker container inspect no-preload-767597 --format={{.State.Status}}
	I0307 22:35:13.330022  208512 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0307 22:35:13.336415  208512 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0307 22:35:13.339031  208512 addons.go:426] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0307 22:35:13.339055  208512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0307 22:35:13.339118  208512 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-767597
	I0307 22:35:13.403412  208512 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0307 22:35:13.405061  208512 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0307 22:35:13.405086  208512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0307 22:35:13.405161  208512 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-767597
	I0307 22:35:13.416685  208512 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0307 22:35:13.415112  208512 addons.go:234] Setting addon default-storageclass=true in "no-preload-767597"
	I0307 22:35:13.415294  208512 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33072 SSHKeyPath:/home/jenkins/minikube-integration/18320-2408/.minikube/machines/no-preload-767597/id_rsa Username:docker}
	I0307 22:35:13.419148  208512 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0307 22:35:13.419166  208512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0307 22:35:13.419219  208512 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-767597
	W0307 22:35:13.419420  208512 addons.go:243] addon default-storageclass should already be in state true
	I0307 22:35:13.419451  208512 host.go:66] Checking if "no-preload-767597" exists ...
	I0307 22:35:13.419885  208512 cli_runner.go:164] Run: docker container inspect no-preload-767597 --format={{.State.Status}}
	I0307 22:35:13.460975  208512 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33072 SSHKeyPath:/home/jenkins/minikube-integration/18320-2408/.minikube/machines/no-preload-767597/id_rsa Username:docker}
	I0307 22:35:13.485213  208512 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0307 22:35:13.485233  208512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0307 22:35:13.485292  208512 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-767597
	I0307 22:35:13.492422  208512 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33072 SSHKeyPath:/home/jenkins/minikube-integration/18320-2408/.minikube/machines/no-preload-767597/id_rsa Username:docker}
	I0307 22:35:13.523511  208512 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33072 SSHKeyPath:/home/jenkins/minikube-integration/18320-2408/.minikube/machines/no-preload-767597/id_rsa Username:docker}
	I0307 22:35:13.553555  208512 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0307 22:35:13.685269  208512 node_ready.go:35] waiting up to 6m0s for node "no-preload-767597" to be "Ready" ...
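"waiting up to 6m0s for node ... to be Ready" reduces to polling the node's NodeReady condition until it reports True. A client-go sketch of that loop (the kubeconfig path is taken from the log; the interval and structure are illustrative, not node_ready.go itself):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func nodeReady(cs *kubernetes.Clientset, name string) (bool, error) {
    	node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
    	if err != nil {
    		return false, err
    	}
    	for _, c := range node.Status.Conditions {
    		if c.Type == corev1.NodeReady {
    			return c.Status == corev1.ConditionTrue, nil
    		}
    	}
    	return false, nil
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18320-2408/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	deadline := time.Now().Add(6 * time.Minute)
    	for time.Now().Before(deadline) {
    		if ok, _ := nodeReady(cs, "no-preload-767597"); ok {
    			fmt.Println("node is Ready")
    			return
    		}
    		time.Sleep(2 * time.Second)
    	}
    	panic("timed out waiting for node to become Ready")
    }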
	I0307 22:35:13.712641  208512 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0307 22:35:13.712666  208512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0307 22:35:13.752532  208512 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0307 22:35:13.752558  208512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0307 22:35:13.789067  208512 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0307 22:35:13.801360  208512 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0307 22:35:13.801379  208512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0307 22:35:13.826420  208512 addons.go:426] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0307 22:35:13.826453  208512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0307 22:35:13.891072  208512 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0307 22:35:13.891110  208512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0307 22:35:13.965948  208512 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0307 22:35:14.017690  208512 addons.go:426] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0307 22:35:14.017722  208512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0307 22:35:14.180530  208512 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0307 22:35:14.180556  208512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0307 22:35:14.290500  208512 addons.go:426] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0307 22:35:14.290526  208512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0307 22:35:14.354148  208512 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0307 22:35:14.444806  208512 addons.go:426] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0307 22:35:14.444833  208512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0307 22:35:14.577504  208512 addons.go:426] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0307 22:35:14.577532  208512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0307 22:35:14.629444  208512 addons.go:426] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0307 22:35:14.629479  208512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0307 22:35:14.697692  208512 addons.go:426] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0307 22:35:14.697718  208512 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0307 22:35:14.785218  208512 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0307 22:35:12.769053  203173 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-497253" in "kube-system" namespace has status "Ready":"False"
	I0307 22:35:15.253885  203173 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-497253" in "kube-system" namespace has status "Ready":"False"
	I0307 22:35:18.211449  208512 node_ready.go:49] node "no-preload-767597" has status "Ready":"True"
	I0307 22:35:18.211491  208512 node_ready.go:38] duration metric: took 4.526184818s for node "no-preload-767597" to be "Ready" ...
	I0307 22:35:18.211503  208512 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
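The pod_ready.go waits that fill the rest of this log apply the same condition test to pods: every "Ready":"True" / "Ready":"False" status below is the pod's PodReady condition. A function-level sketch meant to slot into the same poll loop as the node example above (same client-go assumptions; not minikube's pod_ready.go):

    package readiness

    import (
    	"context"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // podReady reports whether the named pod's PodReady condition is True,
    // e.g. podReady(cs, "kube-system", "coredns-76f75df574-w5rmw").
    func podReady(cs *kubernetes.Clientset, ns, name string) (bool, error) {
    	pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
    	if err != nil {
    		return false, err
    	}
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue, nil
    		}
    	}
    	return false, nil
    }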
	I0307 22:35:18.259359  208512 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-w5rmw" in "kube-system" namespace to be "Ready" ...
	I0307 22:35:18.289582  208512 pod_ready.go:92] pod "coredns-76f75df574-w5rmw" in "kube-system" namespace has status "Ready":"True"
	I0307 22:35:18.289610  208512 pod_ready.go:81] duration metric: took 30.216736ms for pod "coredns-76f75df574-w5rmw" in "kube-system" namespace to be "Ready" ...
	I0307 22:35:18.289621  208512 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-767597" in "kube-system" namespace to be "Ready" ...
	I0307 22:35:18.304829  208512 pod_ready.go:92] pod "etcd-no-preload-767597" in "kube-system" namespace has status "Ready":"True"
	I0307 22:35:18.304853  208512 pod_ready.go:81] duration metric: took 15.224666ms for pod "etcd-no-preload-767597" in "kube-system" namespace to be "Ready" ...
	I0307 22:35:18.304875  208512 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-767597" in "kube-system" namespace to be "Ready" ...
	I0307 22:35:18.355922  208512 pod_ready.go:92] pod "kube-apiserver-no-preload-767597" in "kube-system" namespace has status "Ready":"True"
	I0307 22:35:18.355957  208512 pod_ready.go:81] duration metric: took 51.064552ms for pod "kube-apiserver-no-preload-767597" in "kube-system" namespace to be "Ready" ...
	I0307 22:35:18.355970  208512 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-767597" in "kube-system" namespace to be "Ready" ...
	I0307 22:35:18.378001  208512 pod_ready.go:92] pod "kube-controller-manager-no-preload-767597" in "kube-system" namespace has status "Ready":"True"
	I0307 22:35:18.378032  208512 pod_ready.go:81] duration metric: took 22.054112ms for pod "kube-controller-manager-no-preload-767597" in "kube-system" namespace to be "Ready" ...
	I0307 22:35:18.378044  208512 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-d69xl" in "kube-system" namespace to be "Ready" ...
	I0307 22:35:18.416467  208512 pod_ready.go:92] pod "kube-proxy-d69xl" in "kube-system" namespace has status "Ready":"True"
	I0307 22:35:18.416499  208512 pod_ready.go:81] duration metric: took 38.448118ms for pod "kube-proxy-d69xl" in "kube-system" namespace to be "Ready" ...
	I0307 22:35:18.416514  208512 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-767597" in "kube-system" namespace to be "Ready" ...
	I0307 22:35:18.815705  208512 pod_ready.go:92] pod "kube-scheduler-no-preload-767597" in "kube-system" namespace has status "Ready":"True"
	I0307 22:35:18.815741  208512 pod_ready.go:81] duration metric: took 399.219381ms for pod "kube-scheduler-no-preload-767597" in "kube-system" namespace to be "Ready" ...
	I0307 22:35:18.815754  208512 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-wgs49" in "kube-system" namespace to be "Ready" ...
	I0307 22:35:20.824700  208512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wgs49" in "kube-system" namespace has status "Ready":"False"
	I0307 22:35:21.293581  208512 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.504478529s)
	I0307 22:35:21.293675  208512 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.327628481s)
	I0307 22:35:21.293800  208512 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.508531507s)
	I0307 22:35:21.296005  208512 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-767597 addons enable metrics-server
	
	I0307 22:35:21.293712  208512 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.939531995s)
	I0307 22:35:21.298727  208512 addons.go:470] Verifying addon metrics-server=true in "no-preload-767597"
	I0307 22:35:21.303830  208512 out.go:177] * Enabled addons: storage-provisioner, dashboard, metrics-server, default-storageclass
	I0307 22:35:17.751535  203173 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-497253" in "kube-system" namespace has status "Ready":"False"
	I0307 22:35:20.250616  203173 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-497253" in "kube-system" namespace has status "Ready":"False"
	I0307 22:35:21.305624  208512 addons.go:505] duration metric: took 8.021656311s for enable addons: enabled=[storage-provisioner dashboard metrics-server default-storageclass]
	I0307 22:35:23.322953  208512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wgs49" in "kube-system" namespace has status "Ready":"False"
	I0307 22:35:25.323598  208512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wgs49" in "kube-system" namespace has status "Ready":"False"
	I0307 22:35:22.757325  203173 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-497253" in "kube-system" namespace has status "Ready":"False"
	I0307 22:35:25.278867  203173 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-497253" in "kube-system" namespace has status "Ready":"True"
	I0307 22:35:25.278895  203173 pod_ready.go:81] duration metric: took 1m11.035666367s for pod "kube-controller-manager-old-k8s-version-497253" in "kube-system" namespace to be "Ready" ...
	I0307 22:35:25.278908  203173 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8s7l5" in "kube-system" namespace to be "Ready" ...
	I0307 22:35:25.284398  203173 pod_ready.go:92] pod "kube-proxy-8s7l5" in "kube-system" namespace has status "Ready":"True"
	I0307 22:35:25.284421  203173 pod_ready.go:81] duration metric: took 5.505752ms for pod "kube-proxy-8s7l5" in "kube-system" namespace to be "Ready" ...
	I0307 22:35:25.284432  203173 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-497253" in "kube-system" namespace to be "Ready" ...
	I0307 22:35:25.289813  203173 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-497253" in "kube-system" namespace has status "Ready":"True"
	I0307 22:35:25.289838  203173 pod_ready.go:81] duration metric: took 5.397953ms for pod "kube-scheduler-old-k8s-version-497253" in "kube-system" namespace to be "Ready" ...
	I0307 22:35:25.289850  203173 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-9975d5f86-qmg9k" in "kube-system" namespace to be "Ready" ...
	I0307 22:35:27.296486  203173 pod_ready.go:102] pod "metrics-server-9975d5f86-qmg9k" in "kube-system" namespace has status "Ready":"False"
	I0307 22:35:27.822479  208512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wgs49" in "kube-system" namespace has status "Ready":"False"
	I0307 22:35:30.322295  208512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wgs49" in "kube-system" namespace has status "Ready":"False"
	I0307 22:35:29.297074  203173 pod_ready.go:102] pod "metrics-server-9975d5f86-qmg9k" in "kube-system" namespace has status "Ready":"False"
	I0307 22:35:31.796091  203173 pod_ready.go:102] pod "metrics-server-9975d5f86-qmg9k" in "kube-system" namespace has status "Ready":"False"
	I0307 22:35:32.822330  208512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wgs49" in "kube-system" namespace has status "Ready":"False"
	I0307 22:35:34.822906  208512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wgs49" in "kube-system" namespace has status "Ready":"False"
	I0307 22:35:33.797140  203173 pod_ready.go:102] pod "metrics-server-9975d5f86-qmg9k" in "kube-system" namespace has status "Ready":"False"
	I0307 22:35:35.801061  203173 pod_ready.go:102] pod "metrics-server-9975d5f86-qmg9k" in "kube-system" namespace has status "Ready":"False"
	I0307 22:35:37.323060  208512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wgs49" in "kube-system" namespace has status "Ready":"False"
	I0307 22:35:39.822997  208512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wgs49" in "kube-system" namespace has status "Ready":"False"
	I0307 22:35:38.297272  203173 pod_ready.go:102] pod "metrics-server-9975d5f86-qmg9k" in "kube-system" namespace has status "Ready":"False"
	I0307 22:35:40.796600  203173 pod_ready.go:102] pod "metrics-server-9975d5f86-qmg9k" in "kube-system" namespace has status "Ready":"False"
	I0307 22:35:42.323261  208512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wgs49" in "kube-system" namespace has status "Ready":"False"
	I0307 22:35:44.822616  208512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wgs49" in "kube-system" namespace has status "Ready":"False"
	I0307 22:35:42.797115  203173 pod_ready.go:102] pod "metrics-server-9975d5f86-qmg9k" in "kube-system" namespace has status "Ready":"False"
	I0307 22:35:45.298847  203173 pod_ready.go:102] pod "metrics-server-9975d5f86-qmg9k" in "kube-system" namespace has status "Ready":"False"
	I0307 22:35:47.322250  208512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wgs49" in "kube-system" namespace has status "Ready":"False"
	I0307 22:35:49.322710  208512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wgs49" in "kube-system" namespace has status "Ready":"False"
	I0307 22:35:47.796663  203173 pod_ready.go:102] pod "metrics-server-9975d5f86-qmg9k" in "kube-system" namespace has status "Ready":"False"
	I0307 22:35:50.296374  203173 pod_ready.go:102] pod "metrics-server-9975d5f86-qmg9k" in "kube-system" namespace has status "Ready":"False"
	I0307 22:35:51.822111  208512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wgs49" in "kube-system" namespace has status "Ready":"False"
	I0307 22:35:54.322141  208512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wgs49" in "kube-system" namespace has status "Ready":"False"
	I0307 22:35:52.796187  203173 pod_ready.go:102] pod "metrics-server-9975d5f86-qmg9k" in "kube-system" namespace has status "Ready":"False"
	I0307 22:35:54.796599  203173 pod_ready.go:102] pod "metrics-server-9975d5f86-qmg9k" in "kube-system" namespace has status "Ready":"False"
	I0307 22:35:56.798445  203173 pod_ready.go:102] pod "metrics-server-9975d5f86-qmg9k" in "kube-system" namespace has status "Ready":"False"
	I0307 22:35:56.322482  208512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wgs49" in "kube-system" namespace has status "Ready":"False"
	I0307 22:35:58.821628  208512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wgs49" in "kube-system" namespace has status "Ready":"False"
	I0307 22:35:59.295821  203173 pod_ready.go:102] pod "metrics-server-9975d5f86-qmg9k" in "kube-system" namespace has status "Ready":"False"
	I0307 22:36:01.296396  203173 pod_ready.go:102] pod "metrics-server-9975d5f86-qmg9k" in "kube-system" namespace has status "Ready":"False"
	I0307 22:36:00.821690  208512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wgs49" in "kube-system" namespace has status "Ready":"False"
	I0307 22:36:02.822220  208512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wgs49" in "kube-system" namespace has status "Ready":"False"
	I0307 22:36:05.322526  208512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wgs49" in "kube-system" namespace has status "Ready":"False"
	I0307 22:36:03.796625  203173 pod_ready.go:102] pod "metrics-server-9975d5f86-qmg9k" in "kube-system" namespace has status "Ready":"False"
	I0307 22:36:06.296444  203173 pod_ready.go:102] pod "metrics-server-9975d5f86-qmg9k" in "kube-system" namespace has status "Ready":"False"
	I0307 22:36:07.322791  208512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wgs49" in "kube-system" namespace has status "Ready":"False"
	I0307 22:36:09.822243  208512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wgs49" in "kube-system" namespace has status "Ready":"False"
	I0307 22:36:08.297407  203173 pod_ready.go:102] pod "metrics-server-9975d5f86-qmg9k" in "kube-system" namespace has status "Ready":"False"
	I0307 22:36:10.796602  203173 pod_ready.go:102] pod "metrics-server-9975d5f86-qmg9k" in "kube-system" namespace has status "Ready":"False"
	I0307 22:36:12.323306  208512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wgs49" in "kube-system" namespace has status "Ready":"False"
	I0307 22:36:14.822366  208512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wgs49" in "kube-system" namespace has status "Ready":"False"
	I0307 22:36:13.295686  203173 pod_ready.go:102] pod "metrics-server-9975d5f86-qmg9k" in "kube-system" namespace has status "Ready":"False"
	I0307 22:36:15.296045  203173 pod_ready.go:102] pod "metrics-server-9975d5f86-qmg9k" in "kube-system" namespace has status "Ready":"False"
	I0307 22:36:17.301566  203173 pod_ready.go:102] pod "metrics-server-9975d5f86-qmg9k" in "kube-system" namespace has status "Ready":"False"
	I0307 22:36:17.322596  208512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wgs49" in "kube-system" namespace has status "Ready":"False"
	I0307 22:36:19.821419  208512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wgs49" in "kube-system" namespace has status "Ready":"False"
	I0307 22:36:19.796306  203173 pod_ready.go:102] pod "metrics-server-9975d5f86-qmg9k" in "kube-system" namespace has status "Ready":"False"
	I0307 22:36:22.296441  203173 pod_ready.go:102] pod "metrics-server-9975d5f86-qmg9k" in "kube-system" namespace has status "Ready":"False"
	I0307 22:36:21.822532  208512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wgs49" in "kube-system" namespace has status "Ready":"False"
	I0307 22:36:23.822861  208512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wgs49" in "kube-system" namespace has status "Ready":"False"
	I0307 22:36:24.796739  203173 pod_ready.go:102] pod "metrics-server-9975d5f86-qmg9k" in "kube-system" namespace has status "Ready":"False"
	I0307 22:36:27.297453  203173 pod_ready.go:102] pod "metrics-server-9975d5f86-qmg9k" in "kube-system" namespace has status "Ready":"False"
	I0307 22:36:26.322441  208512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wgs49" in "kube-system" namespace has status "Ready":"False"
	I0307 22:36:28.821852  208512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wgs49" in "kube-system" namespace has status "Ready":"False"
	I0307 22:36:29.300605  203173 pod_ready.go:102] pod "metrics-server-9975d5f86-qmg9k" in "kube-system" namespace has status "Ready":"False"
	I0307 22:36:31.796632  203173 pod_ready.go:102] pod "metrics-server-9975d5f86-qmg9k" in "kube-system" namespace has status "Ready":"False"
	I0307 22:36:30.822490  208512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wgs49" in "kube-system" namespace has status "Ready":"False"
	I0307 22:36:33.321736  208512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wgs49" in "kube-system" namespace has status "Ready":"False"
	I0307 22:36:35.323023  208512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wgs49" in "kube-system" namespace has status "Ready":"False"
	I0307 22:36:34.296019  203173 pod_ready.go:102] pod "metrics-server-9975d5f86-qmg9k" in "kube-system" namespace has status "Ready":"False"
	I0307 22:36:36.296825  203173 pod_ready.go:102] pod "metrics-server-9975d5f86-qmg9k" in "kube-system" namespace has status "Ready":"False"
	I0307 22:36:37.822341  208512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wgs49" in "kube-system" namespace has status "Ready":"False"
	I0307 22:36:39.822628  208512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wgs49" in "kube-system" namespace has status "Ready":"False"
	I0307 22:36:38.796557  203173 pod_ready.go:102] pod "metrics-server-9975d5f86-qmg9k" in "kube-system" namespace has status "Ready":"False"
	I0307 22:36:40.797121  203173 pod_ready.go:102] pod "metrics-server-9975d5f86-qmg9k" in "kube-system" namespace has status "Ready":"False"
	I0307 22:36:41.822738  208512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wgs49" in "kube-system" namespace has status "Ready":"False"
	I0307 22:36:44.321881  208512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wgs49" in "kube-system" namespace has status "Ready":"False"
	I0307 22:36:43.296360  203173 pod_ready.go:102] pod "metrics-server-9975d5f86-qmg9k" in "kube-system" namespace has status "Ready":"False"
	I0307 22:36:45.298301  203173 pod_ready.go:102] pod "metrics-server-9975d5f86-qmg9k" in "kube-system" namespace has status "Ready":"False"
	I0307 22:36:46.821848  208512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wgs49" in "kube-system" namespace has status "Ready":"False"
	I0307 22:36:48.823604  208512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wgs49" in "kube-system" namespace has status "Ready":"False"
	I0307 22:36:47.796686  203173 pod_ready.go:102] pod "metrics-server-9975d5f86-qmg9k" in "kube-system" namespace has status "Ready":"False"
	I0307 22:36:50.295837  203173 pod_ready.go:102] pod "metrics-server-9975d5f86-qmg9k" in "kube-system" namespace has status "Ready":"False"
	I0307 22:36:52.296530  203173 pod_ready.go:102] pod "metrics-server-9975d5f86-qmg9k" in "kube-system" namespace has status "Ready":"False"
	I0307 22:36:51.322330  208512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wgs49" in "kube-system" namespace has status "Ready":"False"
	I0307 22:36:53.822168  208512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wgs49" in "kube-system" namespace has status "Ready":"False"
	I0307 22:36:54.297148  203173 pod_ready.go:102] pod "metrics-server-9975d5f86-qmg9k" in "kube-system" namespace has status "Ready":"False"
	I0307 22:36:56.796796  203173 pod_ready.go:102] pod "metrics-server-9975d5f86-qmg9k" in "kube-system" namespace has status "Ready":"False"
	I0307 22:36:56.322285  208512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wgs49" in "kube-system" namespace has status "Ready":"False"
	I0307 22:36:58.322762  208512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wgs49" in "kube-system" namespace has status "Ready":"False"
	I0307 22:37:00.324353  208512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wgs49" in "kube-system" namespace has status "Ready":"False"
	I0307 22:36:58.796999  203173 pod_ready.go:102] pod "metrics-server-9975d5f86-qmg9k" in "kube-system" namespace has status "Ready":"False"
	I0307 22:37:01.296221  203173 pod_ready.go:102] pod "metrics-server-9975d5f86-qmg9k" in "kube-system" namespace has status "Ready":"False"
	I0307 22:37:02.822525  208512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wgs49" in "kube-system" namespace has status "Ready":"False"
	I0307 22:37:05.322762  208512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wgs49" in "kube-system" namespace has status "Ready":"False"
	I0307 22:37:03.296654  203173 pod_ready.go:102] pod "metrics-server-9975d5f86-qmg9k" in "kube-system" namespace has status "Ready":"False"
	I0307 22:37:05.297692  203173 pod_ready.go:102] pod "metrics-server-9975d5f86-qmg9k" in "kube-system" namespace has status "Ready":"False"
	I0307 22:37:07.327637  208512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wgs49" in "kube-system" namespace has status "Ready":"False"
	I0307 22:37:09.822618  208512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wgs49" in "kube-system" namespace has status "Ready":"False"
	I0307 22:37:07.797249  203173 pod_ready.go:102] pod "metrics-server-9975d5f86-qmg9k" in "kube-system" namespace has status "Ready":"False"
	I0307 22:37:10.296763  203173 pod_ready.go:102] pod "metrics-server-9975d5f86-qmg9k" in "kube-system" namespace has status "Ready":"False"
	I0307 22:37:11.823794  208512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wgs49" in "kube-system" namespace has status "Ready":"False"
	I0307 22:37:14.322055  208512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wgs49" in "kube-system" namespace has status "Ready":"False"
	I0307 22:37:12.797250  203173 pod_ready.go:102] pod "metrics-server-9975d5f86-qmg9k" in "kube-system" namespace has status "Ready":"False"
	I0307 22:37:14.798715  203173 pod_ready.go:102] pod "metrics-server-9975d5f86-qmg9k" in "kube-system" namespace has status "Ready":"False"
	I0307 22:37:17.296697  203173 pod_ready.go:102] pod "metrics-server-9975d5f86-qmg9k" in "kube-system" namespace has status "Ready":"False"
	I0307 22:37:16.322160  208512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wgs49" in "kube-system" namespace has status "Ready":"False"
	I0307 22:37:18.322304  208512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wgs49" in "kube-system" namespace has status "Ready":"False"
	I0307 22:37:19.297631  203173 pod_ready.go:102] pod "metrics-server-9975d5f86-qmg9k" in "kube-system" namespace has status "Ready":"False"
	I0307 22:37:21.796680  203173 pod_ready.go:102] pod "metrics-server-9975d5f86-qmg9k" in "kube-system" namespace has status "Ready":"False"
	I0307 22:37:20.823597  208512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wgs49" in "kube-system" namespace has status "Ready":"False"
	I0307 22:37:23.321913  208512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wgs49" in "kube-system" namespace has status "Ready":"False"
	I0307 22:37:25.322259  208512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wgs49" in "kube-system" namespace has status "Ready":"False"
	I0307 22:37:24.296142  203173 pod_ready.go:102] pod "metrics-server-9975d5f86-qmg9k" in "kube-system" namespace has status "Ready":"False"
	I0307 22:37:26.326766  203173 pod_ready.go:102] pod "metrics-server-9975d5f86-qmg9k" in "kube-system" namespace has status "Ready":"False"
	I0307 22:37:27.822749  208512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wgs49" in "kube-system" namespace has status "Ready":"False"
	I0307 22:37:29.822967  208512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wgs49" in "kube-system" namespace has status "Ready":"False"
	I0307 22:37:28.795966  203173 pod_ready.go:102] pod "metrics-server-9975d5f86-qmg9k" in "kube-system" namespace has status "Ready":"False"
	I0307 22:37:30.796372  203173 pod_ready.go:102] pod "metrics-server-9975d5f86-qmg9k" in "kube-system" namespace has status "Ready":"False"
	I0307 22:37:32.322380  208512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wgs49" in "kube-system" namespace has status "Ready":"False"
	I0307 22:37:34.822121  208512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wgs49" in "kube-system" namespace has status "Ready":"False"
	I0307 22:37:33.296568  203173 pod_ready.go:102] pod "metrics-server-9975d5f86-qmg9k" in "kube-system" namespace has status "Ready":"False"
	I0307 22:37:35.296723  203173 pod_ready.go:102] pod "metrics-server-9975d5f86-qmg9k" in "kube-system" namespace has status "Ready":"False"
	I0307 22:37:36.822226  208512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wgs49" in "kube-system" namespace has status "Ready":"False"
	I0307 22:37:39.321983  208512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wgs49" in "kube-system" namespace has status "Ready":"False"
	I0307 22:37:37.796345  203173 pod_ready.go:102] pod "metrics-server-9975d5f86-qmg9k" in "kube-system" namespace has status "Ready":"False"
	I0307 22:37:40.295858  203173 pod_ready.go:102] pod "metrics-server-9975d5f86-qmg9k" in "kube-system" namespace has status "Ready":"False"
	I0307 22:37:42.297400  203173 pod_ready.go:102] pod "metrics-server-9975d5f86-qmg9k" in "kube-system" namespace has status "Ready":"False"
	I0307 22:37:41.322608  208512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wgs49" in "kube-system" namespace has status "Ready":"False"
	I0307 22:37:43.822624  208512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wgs49" in "kube-system" namespace has status "Ready":"False"
	I0307 22:37:44.796351  203173 pod_ready.go:102] pod "metrics-server-9975d5f86-qmg9k" in "kube-system" namespace has status "Ready":"False"
	I0307 22:37:46.796401  203173 pod_ready.go:102] pod "metrics-server-9975d5f86-qmg9k" in "kube-system" namespace has status "Ready":"False"
	I0307 22:37:46.321785  208512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wgs49" in "kube-system" namespace has status "Ready":"False"
	I0307 22:37:48.321869  208512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wgs49" in "kube-system" namespace has status "Ready":"False"
	I0307 22:37:48.797340  203173 pod_ready.go:102] pod "metrics-server-9975d5f86-qmg9k" in "kube-system" namespace has status "Ready":"False"
	I0307 22:37:51.296470  203173 pod_ready.go:102] pod "metrics-server-9975d5f86-qmg9k" in "kube-system" namespace has status "Ready":"False"
	I0307 22:37:50.822896  208512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wgs49" in "kube-system" namespace has status "Ready":"False"
	I0307 22:37:52.823106  208512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wgs49" in "kube-system" namespace has status "Ready":"False"
	I0307 22:37:55.322025  208512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wgs49" in "kube-system" namespace has status "Ready":"False"
	I0307 22:37:53.296690  203173 pod_ready.go:102] pod "metrics-server-9975d5f86-qmg9k" in "kube-system" namespace has status "Ready":"False"
	I0307 22:37:55.298602  203173 pod_ready.go:102] pod "metrics-server-9975d5f86-qmg9k" in "kube-system" namespace has status "Ready":"False"
	I0307 22:37:57.322728  208512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wgs49" in "kube-system" namespace has status "Ready":"False"
	I0307 22:37:59.822997  208512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wgs49" in "kube-system" namespace has status "Ready":"False"
	I0307 22:37:57.796372  203173 pod_ready.go:102] pod "metrics-server-9975d5f86-qmg9k" in "kube-system" namespace has status "Ready":"False"
	I0307 22:37:59.796520  203173 pod_ready.go:102] pod "metrics-server-9975d5f86-qmg9k" in "kube-system" namespace has status "Ready":"False"
	I0307 22:38:02.296249  203173 pod_ready.go:102] pod "metrics-server-9975d5f86-qmg9k" in "kube-system" namespace has status "Ready":"False"
	I0307 22:38:02.322508  208512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wgs49" in "kube-system" namespace has status "Ready":"False"
	I0307 22:38:04.822317  208512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wgs49" in "kube-system" namespace has status "Ready":"False"
	I0307 22:38:04.796146  203173 pod_ready.go:102] pod "metrics-server-9975d5f86-qmg9k" in "kube-system" namespace has status "Ready":"False"
	I0307 22:38:06.796971  203173 pod_ready.go:102] pod "metrics-server-9975d5f86-qmg9k" in "kube-system" namespace has status "Ready":"False"
	I0307 22:38:07.322178  208512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wgs49" in "kube-system" namespace has status "Ready":"False"
	I0307 22:38:09.822101  208512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wgs49" in "kube-system" namespace has status "Ready":"False"
	I0307 22:38:09.296375  203173 pod_ready.go:102] pod "metrics-server-9975d5f86-qmg9k" in "kube-system" namespace has status "Ready":"False"
	I0307 22:38:11.795721  203173 pod_ready.go:102] pod "metrics-server-9975d5f86-qmg9k" in "kube-system" namespace has status "Ready":"False"
	I0307 22:38:11.823000  208512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wgs49" in "kube-system" namespace has status "Ready":"False"
	I0307 22:38:14.321873  208512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wgs49" in "kube-system" namespace has status "Ready":"False"
	I0307 22:38:13.801706  203173 pod_ready.go:102] pod "metrics-server-9975d5f86-qmg9k" in "kube-system" namespace has status "Ready":"False"
	I0307 22:38:16.296885  203173 pod_ready.go:102] pod "metrics-server-9975d5f86-qmg9k" in "kube-system" namespace has status "Ready":"False"
	I0307 22:38:16.321916  208512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wgs49" in "kube-system" namespace has status "Ready":"False"
	I0307 22:38:18.821728  208512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wgs49" in "kube-system" namespace has status "Ready":"False"
	I0307 22:38:18.298497  203173 pod_ready.go:102] pod "metrics-server-9975d5f86-qmg9k" in "kube-system" namespace has status "Ready":"False"
	I0307 22:38:20.796022  203173 pod_ready.go:102] pod "metrics-server-9975d5f86-qmg9k" in "kube-system" namespace has status "Ready":"False"
	I0307 22:38:20.822225  208512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wgs49" in "kube-system" namespace has status "Ready":"False"
	I0307 22:38:22.822800  208512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wgs49" in "kube-system" namespace has status "Ready":"False"
	I0307 22:38:24.823201  208512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wgs49" in "kube-system" namespace has status "Ready":"False"
	I0307 22:38:22.797461  203173 pod_ready.go:102] pod "metrics-server-9975d5f86-qmg9k" in "kube-system" namespace has status "Ready":"False"
	I0307 22:38:25.296255  203173 pod_ready.go:102] pod "metrics-server-9975d5f86-qmg9k" in "kube-system" namespace has status "Ready":"False"
	I0307 22:38:27.297096  203173 pod_ready.go:102] pod "metrics-server-9975d5f86-qmg9k" in "kube-system" namespace has status "Ready":"False"
	I0307 22:38:26.824702  208512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wgs49" in "kube-system" namespace has status "Ready":"False"
	I0307 22:38:29.322418  208512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wgs49" in "kube-system" namespace has status "Ready":"False"
	I0307 22:38:29.796172  203173 pod_ready.go:102] pod "metrics-server-9975d5f86-qmg9k" in "kube-system" namespace has status "Ready":"False"
	I0307 22:38:32.295946  203173 pod_ready.go:102] pod "metrics-server-9975d5f86-qmg9k" in "kube-system" namespace has status "Ready":"False"
	I0307 22:38:31.822283  208512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wgs49" in "kube-system" namespace has status "Ready":"False"
	I0307 22:38:34.322435  208512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wgs49" in "kube-system" namespace has status "Ready":"False"
	I0307 22:38:34.795870  203173 pod_ready.go:102] pod "metrics-server-9975d5f86-qmg9k" in "kube-system" namespace has status "Ready":"False"
	I0307 22:38:36.796339  203173 pod_ready.go:102] pod "metrics-server-9975d5f86-qmg9k" in "kube-system" namespace has status "Ready":"False"
	I0307 22:38:36.322617  208512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wgs49" in "kube-system" namespace has status "Ready":"False"
	I0307 22:38:38.821766  208512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wgs49" in "kube-system" namespace has status "Ready":"False"
	I0307 22:38:38.796554  203173 pod_ready.go:102] pod "metrics-server-9975d5f86-qmg9k" in "kube-system" namespace has status "Ready":"False"
	I0307 22:38:41.297458  203173 pod_ready.go:102] pod "metrics-server-9975d5f86-qmg9k" in "kube-system" namespace has status "Ready":"False"
	I0307 22:38:40.821870  208512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wgs49" in "kube-system" namespace has status "Ready":"False"
	I0307 22:38:42.822216  208512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wgs49" in "kube-system" namespace has status "Ready":"False"
	I0307 22:38:45.323003  208512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wgs49" in "kube-system" namespace has status "Ready":"False"
	I0307 22:38:43.795600  203173 pod_ready.go:102] pod "metrics-server-9975d5f86-qmg9k" in "kube-system" namespace has status "Ready":"False"
	I0307 22:38:45.797095  203173 pod_ready.go:102] pod "metrics-server-9975d5f86-qmg9k" in "kube-system" namespace has status "Ready":"False"
	I0307 22:38:47.822420  208512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wgs49" in "kube-system" namespace has status "Ready":"False"
	I0307 22:38:49.824954  208512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wgs49" in "kube-system" namespace has status "Ready":"False"
	I0307 22:38:47.802184  203173 pod_ready.go:102] pod "metrics-server-9975d5f86-qmg9k" in "kube-system" namespace has status "Ready":"False"
	I0307 22:38:50.296669  203173 pod_ready.go:102] pod "metrics-server-9975d5f86-qmg9k" in "kube-system" namespace has status "Ready":"False"
	I0307 22:38:52.321790  208512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wgs49" in "kube-system" namespace has status "Ready":"False"
	I0307 22:38:54.322148  208512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wgs49" in "kube-system" namespace has status "Ready":"False"
	I0307 22:38:52.795972  203173 pod_ready.go:102] pod "metrics-server-9975d5f86-qmg9k" in "kube-system" namespace has status "Ready":"False"
	I0307 22:38:55.296865  203173 pod_ready.go:102] pod "metrics-server-9975d5f86-qmg9k" in "kube-system" namespace has status "Ready":"False"
	I0307 22:38:56.322286  208512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wgs49" in "kube-system" namespace has status "Ready":"False"
	I0307 22:38:58.323011  208512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wgs49" in "kube-system" namespace has status "Ready":"False"
	I0307 22:39:00.327819  208512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wgs49" in "kube-system" namespace has status "Ready":"False"
	I0307 22:38:57.795956  203173 pod_ready.go:102] pod "metrics-server-9975d5f86-qmg9k" in "kube-system" namespace has status "Ready":"False"
	I0307 22:39:00.301337  203173 pod_ready.go:102] pod "metrics-server-9975d5f86-qmg9k" in "kube-system" namespace has status "Ready":"False"
	I0307 22:39:02.822061  208512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wgs49" in "kube-system" namespace has status "Ready":"False"
	I0307 22:39:04.822313  208512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wgs49" in "kube-system" namespace has status "Ready":"False"
	I0307 22:39:02.796169  203173 pod_ready.go:102] pod "metrics-server-9975d5f86-qmg9k" in "kube-system" namespace has status "Ready":"False"
	I0307 22:39:05.295485  203173 pod_ready.go:102] pod "metrics-server-9975d5f86-qmg9k" in "kube-system" namespace has status "Ready":"False"
	I0307 22:39:07.296560  203173 pod_ready.go:102] pod "metrics-server-9975d5f86-qmg9k" in "kube-system" namespace has status "Ready":"False"
	I0307 22:39:07.322963  208512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wgs49" in "kube-system" namespace has status "Ready":"False"
	I0307 22:39:09.822090  208512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wgs49" in "kube-system" namespace has status "Ready":"False"
	I0307 22:39:09.796355  203173 pod_ready.go:102] pod "metrics-server-9975d5f86-qmg9k" in "kube-system" namespace has status "Ready":"False"
	I0307 22:39:12.295856  203173 pod_ready.go:102] pod "metrics-server-9975d5f86-qmg9k" in "kube-system" namespace has status "Ready":"False"
	I0307 22:39:12.324257  208512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wgs49" in "kube-system" namespace has status "Ready":"False"
	I0307 22:39:14.822039  208512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wgs49" in "kube-system" namespace has status "Ready":"False"
	I0307 22:39:14.296046  203173 pod_ready.go:102] pod "metrics-server-9975d5f86-qmg9k" in "kube-system" namespace has status "Ready":"False"
	I0307 22:39:16.796231  203173 pod_ready.go:102] pod "metrics-server-9975d5f86-qmg9k" in "kube-system" namespace has status "Ready":"False"
	I0307 22:39:16.822121  208512 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wgs49" in "kube-system" namespace has status "Ready":"False"
	I0307 22:39:18.821707  208512 pod_ready.go:81] duration metric: took 4m0.005941928s for pod "metrics-server-57f55c9bc5-wgs49" in "kube-system" namespace to be "Ready" ...
	E0307 22:39:18.821734  208512 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0307 22:39:18.821746  208512 pod_ready.go:38] duration metric: took 4m0.610230786s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
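
The four minutes of pod_ready.go:102 lines above are a readiness poll: every few seconds the test re-reads the metrics-server pod and checks its Ready condition, until the 4m0s deadline expires and WaitExtra surfaces the "context deadline exceeded" error logged here. Below is a minimal Go sketch of such a loop, using client-go and apimachinery's wait package; the function name, namespace handling, and intervals are illustrative assumptions, not minikube's actual pod_ready.go code:

	// pod_ready_sketch.go — hypothetical sketch of the readiness poll behind
	// the pod_ready.go:102 lines above; names and timings are assumptions.
	package main

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitPodReady polls the pod's Ready condition every interval until it is
	// true or the deadline expires, which mirrors the "context deadline
	// exceeded" failure mode logged above.
	func waitPodReady(client kubernetes.Interface, ns, name string) error {
		ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
		defer cancel()
		return wait.PollUntilContextCancel(ctx, 2*time.Second, true,
			func(ctx context.Context) (bool, error) {
				pod, err := client.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, nil // transient API errors: keep polling
				}
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
	}
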
	I0307 22:39:18.821760  208512 api_server.go:52] waiting for apiserver process to appear ...
	I0307 22:39:18.821789  208512 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0307 22:39:18.821851  208512 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0307 22:39:18.869636  208512 cri.go:89] found id: "8538ff6a2530eeddd6961ceeea4e8b8f31e001b211f6b0e42b463f41bfc8eb9b"
	I0307 22:39:18.869660  208512 cri.go:89] found id: "3ad41b9769064eab5e8003ea9de4b5b9f58af6a4d81413a5083b9b5abc308170"
	I0307 22:39:18.869665  208512 cri.go:89] found id: ""
	I0307 22:39:18.869672  208512 logs.go:276] 2 containers: [8538ff6a2530eeddd6961ceeea4e8b8f31e001b211f6b0e42b463f41bfc8eb9b 3ad41b9769064eab5e8003ea9de4b5b9f58af6a4d81413a5083b9b5abc308170]
	I0307 22:39:18.869750  208512 ssh_runner.go:195] Run: which crictl
	I0307 22:39:18.874986  208512 ssh_runner.go:195] Run: which crictl
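
Each cri.go:54 / ssh_runner.go:195 pair above resolves the container IDs for one control-plane component by running crictl with --quiet, which prints bare IDs one per line (the trailing `found id: ""` is likely the empty final entry of splitting output that ends in a newline). A hedged sketch of such a lookup follows; the runner signature is an assumption standing in for minikube's SSH runner, not its actual type:

	// crictl_list_sketch.go — hypothetical container-ID lookup mirroring the
	// logged command: sudo crictl ps -a --quiet --name=<component>
	package main

	import "strings"

	func listContainerIDs(run func(cmd string) (string, error), component string) ([]string, error) {
		out, err := run("sudo crictl ps -a --quiet --name=" + component)
		if err != nil {
			return nil, err
		}
		// --quiet emits one container ID per line; strings.Fields drops the
		// blank trailing entry that a raw split would log as found id: "".
		return strings.Fields(out), nil
	}
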
	I0307 22:39:18.878367  208512 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0307 22:39:18.878450  208512 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0307 22:39:18.931485  208512 cri.go:89] found id: "8d3da2ccfc6f3e03a090f259d264c023c0220868e0038f3aab52d0bee1e0f788"
	I0307 22:39:18.931516  208512 cri.go:89] found id: "61d3947cf55d680139b701ec61051f9ada8b04b18e981f05adeac9a4a7a67d89"
	I0307 22:39:18.931521  208512 cri.go:89] found id: ""
	I0307 22:39:18.931529  208512 logs.go:276] 2 containers: [8d3da2ccfc6f3e03a090f259d264c023c0220868e0038f3aab52d0bee1e0f788 61d3947cf55d680139b701ec61051f9ada8b04b18e981f05adeac9a4a7a67d89]
	I0307 22:39:18.931597  208512 ssh_runner.go:195] Run: which crictl
	I0307 22:39:18.935728  208512 ssh_runner.go:195] Run: which crictl
	I0307 22:39:18.939289  208512 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0307 22:39:18.939367  208512 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0307 22:39:18.983237  208512 cri.go:89] found id: "66c0213f3c233ac7b951a4b86aab5ef89cc56a2ca5499180954b801d7ed742d1"
	I0307 22:39:18.983298  208512 cri.go:89] found id: "cd25d5f2cd96fffd3dc549470243d20ea7e47ef4dcd423227a4a52724f521183"
	I0307 22:39:18.983319  208512 cri.go:89] found id: ""
	I0307 22:39:18.983343  208512 logs.go:276] 2 containers: [66c0213f3c233ac7b951a4b86aab5ef89cc56a2ca5499180954b801d7ed742d1 cd25d5f2cd96fffd3dc549470243d20ea7e47ef4dcd423227a4a52724f521183]
	I0307 22:39:18.983416  208512 ssh_runner.go:195] Run: which crictl
	I0307 22:39:18.987539  208512 ssh_runner.go:195] Run: which crictl
	I0307 22:39:18.991066  208512 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0307 22:39:18.991166  208512 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0307 22:39:19.032889  208512 cri.go:89] found id: "ddaf309f6c6682b8d155b529ee31ae4f0dce1a6a9763127da6df7fdbbd31a16d"
	I0307 22:39:19.032910  208512 cri.go:89] found id: "da7081019e6f16e832848ada7b2e006667f931fc538edab6b1ebd551fa208cbe"
	I0307 22:39:19.032915  208512 cri.go:89] found id: ""
	I0307 22:39:19.032922  208512 logs.go:276] 2 containers: [ddaf309f6c6682b8d155b529ee31ae4f0dce1a6a9763127da6df7fdbbd31a16d da7081019e6f16e832848ada7b2e006667f931fc538edab6b1ebd551fa208cbe]
	I0307 22:39:19.032976  208512 ssh_runner.go:195] Run: which crictl
	I0307 22:39:19.036597  208512 ssh_runner.go:195] Run: which crictl
	I0307 22:39:19.039880  208512 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0307 22:39:19.039952  208512 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0307 22:39:19.089864  208512 cri.go:89] found id: "7c807963c5849afb9cda04a10c0ba18eb4139242b1caf7d11205fa10706faf36"
	I0307 22:39:19.089888  208512 cri.go:89] found id: "775c4272397dc700a1daf97c4e663e7fc3f71729356cee17bd4235ddf6daa2a3"
	I0307 22:39:19.089894  208512 cri.go:89] found id: ""
	I0307 22:39:19.089900  208512 logs.go:276] 2 containers: [7c807963c5849afb9cda04a10c0ba18eb4139242b1caf7d11205fa10706faf36 775c4272397dc700a1daf97c4e663e7fc3f71729356cee17bd4235ddf6daa2a3]
	I0307 22:39:19.089959  208512 ssh_runner.go:195] Run: which crictl
	I0307 22:39:19.093701  208512 ssh_runner.go:195] Run: which crictl
	I0307 22:39:19.096965  208512 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0307 22:39:19.097036  208512 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0307 22:39:19.143660  208512 cri.go:89] found id: "bf0796c3bdeb1fb7cbeadd653a03514f86b475a0faf4571351606a60b1da8a91"
	I0307 22:39:19.143681  208512 cri.go:89] found id: "a52fdbe185cf6604953f1db9bf33ddda100e7929d29ddd019abb2c4f628ccf8a"
	I0307 22:39:19.143686  208512 cri.go:89] found id: ""
	I0307 22:39:19.143693  208512 logs.go:276] 2 containers: [bf0796c3bdeb1fb7cbeadd653a03514f86b475a0faf4571351606a60b1da8a91 a52fdbe185cf6604953f1db9bf33ddda100e7929d29ddd019abb2c4f628ccf8a]
	I0307 22:39:19.143781  208512 ssh_runner.go:195] Run: which crictl
	I0307 22:39:19.147396  208512 ssh_runner.go:195] Run: which crictl
	I0307 22:39:19.150712  208512 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0307 22:39:19.150808  208512 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0307 22:39:19.187166  208512 cri.go:89] found id: "20752f16c26ce3f933b39f1e042151aa9c6c927a788b023f74cd5e26eb4e9dea"
	I0307 22:39:19.187239  208512 cri.go:89] found id: "988c7824310b34904cc0fcc89dceab6a97539faa8b8f508cbcbbdf112c8521fb"
	I0307 22:39:19.187258  208512 cri.go:89] found id: ""
	I0307 22:39:19.187281  208512 logs.go:276] 2 containers: [20752f16c26ce3f933b39f1e042151aa9c6c927a788b023f74cd5e26eb4e9dea 988c7824310b34904cc0fcc89dceab6a97539faa8b8f508cbcbbdf112c8521fb]
	I0307 22:39:19.187359  208512 ssh_runner.go:195] Run: which crictl
	I0307 22:39:19.191044  208512 ssh_runner.go:195] Run: which crictl
	I0307 22:39:19.194510  208512 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0307 22:39:19.194599  208512 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0307 22:39:19.232458  208512 cri.go:89] found id: "15345923272f30bf45fa6edcd96846838ad863ca72c9fe79a0ec5ca8d5c28a7e"
	I0307 22:39:19.232491  208512 cri.go:89] found id: ""
	I0307 22:39:19.232500  208512 logs.go:276] 1 containers: [15345923272f30bf45fa6edcd96846838ad863ca72c9fe79a0ec5ca8d5c28a7e]
	I0307 22:39:19.232567  208512 ssh_runner.go:195] Run: which crictl
	I0307 22:39:19.235934  208512 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0307 22:39:19.236043  208512 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0307 22:39:19.272434  208512 cri.go:89] found id: "ac29cd6286420edd0596fdc8868b3ee5d6b7b98bc806044fb91e4e443a93b393"
	I0307 22:39:19.272457  208512 cri.go:89] found id: "3e0ca1a11903bcbc026af5c35f9936a9948e4225c5df356b916422ffc52a02b8"
	I0307 22:39:19.272462  208512 cri.go:89] found id: ""
	I0307 22:39:19.272469  208512 logs.go:276] 2 containers: [ac29cd6286420edd0596fdc8868b3ee5d6b7b98bc806044fb91e4e443a93b393 3e0ca1a11903bcbc026af5c35f9936a9948e4225c5df356b916422ffc52a02b8]
	I0307 22:39:19.272525  208512 ssh_runner.go:195] Run: which crictl
	I0307 22:39:19.276097  208512 ssh_runner.go:195] Run: which crictl
	I0307 22:39:19.279492  208512 logs.go:123] Gathering logs for kube-scheduler [ddaf309f6c6682b8d155b529ee31ae4f0dce1a6a9763127da6df7fdbbd31a16d] ...
	I0307 22:39:19.279527  208512 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ddaf309f6c6682b8d155b529ee31ae4f0dce1a6a9763127da6df7fdbbd31a16d"
	I0307 22:39:19.335718  208512 logs.go:123] Gathering logs for kube-proxy [775c4272397dc700a1daf97c4e663e7fc3f71729356cee17bd4235ddf6daa2a3] ...
	I0307 22:39:19.335749  208512 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 775c4272397dc700a1daf97c4e663e7fc3f71729356cee17bd4235ddf6daa2a3"
	I0307 22:39:19.373413  208512 logs.go:123] Gathering logs for kube-controller-manager [bf0796c3bdeb1fb7cbeadd653a03514f86b475a0faf4571351606a60b1da8a91] ...
	I0307 22:39:19.373446  208512 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bf0796c3bdeb1fb7cbeadd653a03514f86b475a0faf4571351606a60b1da8a91"
	I0307 22:39:19.436765  208512 logs.go:123] Gathering logs for kube-controller-manager [a52fdbe185cf6604953f1db9bf33ddda100e7929d29ddd019abb2c4f628ccf8a] ...
	I0307 22:39:19.436799  208512 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a52fdbe185cf6604953f1db9bf33ddda100e7929d29ddd019abb2c4f628ccf8a"
	I0307 22:39:19.494641  208512 logs.go:123] Gathering logs for kubernetes-dashboard [15345923272f30bf45fa6edcd96846838ad863ca72c9fe79a0ec5ca8d5c28a7e] ...
	I0307 22:39:19.494712  208512 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 15345923272f30bf45fa6edcd96846838ad863ca72c9fe79a0ec5ca8d5c28a7e"
	I0307 22:39:19.540826  208512 logs.go:123] Gathering logs for dmesg ...
	I0307 22:39:19.540855  208512 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 22:39:19.561869  208512 logs.go:123] Gathering logs for kube-apiserver [3ad41b9769064eab5e8003ea9de4b5b9f58af6a4d81413a5083b9b5abc308170] ...
	I0307 22:39:19.561898  208512 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ad41b9769064eab5e8003ea9de4b5b9f58af6a4d81413a5083b9b5abc308170"
	I0307 22:39:19.610968  208512 logs.go:123] Gathering logs for coredns [66c0213f3c233ac7b951a4b86aab5ef89cc56a2ca5499180954b801d7ed742d1] ...
	I0307 22:39:19.610998  208512 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 66c0213f3c233ac7b951a4b86aab5ef89cc56a2ca5499180954b801d7ed742d1"
	I0307 22:39:19.662744  208512 logs.go:123] Gathering logs for containerd ...
	I0307 22:39:19.662774  208512 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0307 22:39:19.730097  208512 logs.go:123] Gathering logs for kindnet [20752f16c26ce3f933b39f1e042151aa9c6c927a788b023f74cd5e26eb4e9dea] ...
	I0307 22:39:19.730132  208512 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 20752f16c26ce3f933b39f1e042151aa9c6c927a788b023f74cd5e26eb4e9dea"
	I0307 22:39:19.778859  208512 logs.go:123] Gathering logs for kindnet [988c7824310b34904cc0fcc89dceab6a97539faa8b8f508cbcbbdf112c8521fb] ...
	I0307 22:39:19.778889  208512 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 988c7824310b34904cc0fcc89dceab6a97539faa8b8f508cbcbbdf112c8521fb"
	I0307 22:39:19.823368  208512 logs.go:123] Gathering logs for storage-provisioner [3e0ca1a11903bcbc026af5c35f9936a9948e4225c5df356b916422ffc52a02b8] ...
	I0307 22:39:19.823397  208512 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e0ca1a11903bcbc026af5c35f9936a9948e4225c5df356b916422ffc52a02b8"
	I0307 22:39:19.859667  208512 logs.go:123] Gathering logs for describe nodes ...
	I0307 22:39:19.859702  208512 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 22:39:20.001010  208512 logs.go:123] Gathering logs for etcd [8d3da2ccfc6f3e03a090f259d264c023c0220868e0038f3aab52d0bee1e0f788] ...
	I0307 22:39:20.001044  208512 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d3da2ccfc6f3e03a090f259d264c023c0220868e0038f3aab52d0bee1e0f788"
	I0307 22:39:20.068625  208512 logs.go:123] Gathering logs for etcd [61d3947cf55d680139b701ec61051f9ada8b04b18e981f05adeac9a4a7a67d89] ...
	I0307 22:39:20.068658  208512 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 61d3947cf55d680139b701ec61051f9ada8b04b18e981f05adeac9a4a7a67d89"
	I0307 22:39:20.126808  208512 logs.go:123] Gathering logs for container status ...
	I0307 22:39:20.126847  208512 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 22:39:20.181924  208512 logs.go:123] Gathering logs for kube-scheduler [da7081019e6f16e832848ada7b2e006667f931fc538edab6b1ebd551fa208cbe] ...
	I0307 22:39:20.181959  208512 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 da7081019e6f16e832848ada7b2e006667f931fc538edab6b1ebd551fa208cbe"
	I0307 22:39:20.242842  208512 logs.go:123] Gathering logs for kube-proxy [7c807963c5849afb9cda04a10c0ba18eb4139242b1caf7d11205fa10706faf36] ...
	I0307 22:39:20.242877  208512 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c807963c5849afb9cda04a10c0ba18eb4139242b1caf7d11205fa10706faf36"
	I0307 22:39:20.283160  208512 logs.go:123] Gathering logs for storage-provisioner [ac29cd6286420edd0596fdc8868b3ee5d6b7b98bc806044fb91e4e443a93b393] ...
	I0307 22:39:20.283190  208512 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ac29cd6286420edd0596fdc8868b3ee5d6b7b98bc806044fb91e4e443a93b393"
	I0307 22:39:20.334236  208512 logs.go:123] Gathering logs for kubelet ...
	I0307 22:39:20.334265  208512 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 22:39:20.407784  208512 logs.go:123] Gathering logs for kube-apiserver [8538ff6a2530eeddd6961ceeea4e8b8f31e001b211f6b0e42b463f41bfc8eb9b] ...
	I0307 22:39:20.407819  208512 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8538ff6a2530eeddd6961ceeea4e8b8f31e001b211f6b0e42b463f41bfc8eb9b"
	I0307 22:39:20.466820  208512 logs.go:123] Gathering logs for coredns [cd25d5f2cd96fffd3dc549470243d20ea7e47ef4dcd423227a4a52724f521183] ...
	I0307 22:39:20.466857  208512 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cd25d5f2cd96fffd3dc549470243d20ea7e47ef4dcd423227a4a52724f521183"
	I0307 22:39:18.796994  203173 pod_ready.go:102] pod "metrics-server-9975d5f86-qmg9k" in "kube-system" namespace has status "Ready":"False"
	I0307 22:39:20.797675  203173 pod_ready.go:102] pod "metrics-server-9975d5f86-qmg9k" in "kube-system" namespace has status "Ready":"False"
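
With the wait abandoned, logs.go:123 assembles a post-mortem: the last 400 lines of every discovered container via crictl logs, plus journalctl for the kubelet and containerd units, dmesg, and kubectl describe nodes, as the commands above show. A sketch of the per-container step; the helper name and runner signature are assumptions:

	// gather_logs_sketch.go — hypothetical sketch of one "Gathering logs for
	// ..." step: tail the last 400 lines of a container through crictl.
	package main

	import "fmt"

	func gatherContainerLogs(run func(cmd string) (string, error), id string) (string, error) {
		// Matches the logged command shape:
		//   /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 <id>"
		return run(fmt.Sprintf(`/bin/bash -c "sudo /usr/bin/crictl logs --tail 400 %s"`, id))
	}
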
	I0307 22:39:23.017796  208512 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 22:39:23.029818  208512 api_server.go:72] duration metric: took 4m9.746175934s to wait for apiserver process to appear ...
	I0307 22:39:23.029891  208512 api_server.go:88] waiting for apiserver healthz status ...
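
Before this point, api_server.go:52 waited for the kube-apiserver process itself via the pgrep command logged above; the healthz wait that starts here then polls the apiserver's /healthz endpoint until it answers. A hedged sketch of both steps follows; the address, polling interval, and InsecureSkipVerify are assumptions (minikube resolves the real host:port and certificates from the cluster config), and these are not minikube's actual api_server.go functions:

	// apiserver_wait_sketch.go — hypothetical sketch of the two apiserver waits.
	package main

	import (
		"context"
		"crypto/tls"
		"net/http"
		"time"
	)

	// waitAPIServerProcess polls pgrep (the exact command from the log) until
	// it exits 0, i.e. until a matching kube-apiserver process exists.
	func waitAPIServerProcess(ctx context.Context, run func(cmd string) (string, error)) error {
		ticker := time.NewTicker(time.Second)
		defer ticker.Stop()
		for {
			if _, err := run(`sudo pgrep -xnf kube-apiserver.*minikube.*`); err == nil {
				return nil
			}
			select {
			case <-ctx.Done():
				return ctx.Err()
			case <-ticker.C:
			}
		}
	}

	// apiServerHealthy performs one healthz probe; the caller polls it until
	// it returns true or a deadline expires.
	func apiServerHealthy(addr string) bool {
		client := &http.Client{
			// Assumption: this sketch skips CA verification rather than
			// wiring up the cluster's certificates.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}
		resp, err := client.Get("https://" + addr + "/healthz")
		if err != nil {
			return false
		}
		defer resp.Body.Close()
		return resp.StatusCode == http.StatusOK
	}
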
	I0307 22:39:23.029940  208512 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0307 22:39:23.030034  208512 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0307 22:39:23.075382  208512 cri.go:89] found id: "8538ff6a2530eeddd6961ceeea4e8b8f31e001b211f6b0e42b463f41bfc8eb9b"
	I0307 22:39:23.075405  208512 cri.go:89] found id: "3ad41b9769064eab5e8003ea9de4b5b9f58af6a4d81413a5083b9b5abc308170"
	I0307 22:39:23.075411  208512 cri.go:89] found id: ""
	I0307 22:39:23.075418  208512 logs.go:276] 2 containers: [8538ff6a2530eeddd6961ceeea4e8b8f31e001b211f6b0e42b463f41bfc8eb9b 3ad41b9769064eab5e8003ea9de4b5b9f58af6a4d81413a5083b9b5abc308170]
	I0307 22:39:23.075483  208512 ssh_runner.go:195] Run: which crictl
	I0307 22:39:23.079550  208512 ssh_runner.go:195] Run: which crictl
	I0307 22:39:23.082825  208512 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0307 22:39:23.082889  208512 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0307 22:39:23.120086  208512 cri.go:89] found id: "8d3da2ccfc6f3e03a090f259d264c023c0220868e0038f3aab52d0bee1e0f788"
	I0307 22:39:23.120109  208512 cri.go:89] found id: "61d3947cf55d680139b701ec61051f9ada8b04b18e981f05adeac9a4a7a67d89"
	I0307 22:39:23.120115  208512 cri.go:89] found id: ""
	I0307 22:39:23.120123  208512 logs.go:276] 2 containers: [8d3da2ccfc6f3e03a090f259d264c023c0220868e0038f3aab52d0bee1e0f788 61d3947cf55d680139b701ec61051f9ada8b04b18e981f05adeac9a4a7a67d89]
	I0307 22:39:23.120196  208512 ssh_runner.go:195] Run: which crictl
	I0307 22:39:23.123687  208512 ssh_runner.go:195] Run: which crictl
	I0307 22:39:23.127251  208512 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0307 22:39:23.127409  208512 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0307 22:39:23.167955  208512 cri.go:89] found id: "66c0213f3c233ac7b951a4b86aab5ef89cc56a2ca5499180954b801d7ed742d1"
	I0307 22:39:23.168028  208512 cri.go:89] found id: "cd25d5f2cd96fffd3dc549470243d20ea7e47ef4dcd423227a4a52724f521183"
	I0307 22:39:23.168059  208512 cri.go:89] found id: ""
	I0307 22:39:23.168080  208512 logs.go:276] 2 containers: [66c0213f3c233ac7b951a4b86aab5ef89cc56a2ca5499180954b801d7ed742d1 cd25d5f2cd96fffd3dc549470243d20ea7e47ef4dcd423227a4a52724f521183]
	I0307 22:39:23.168170  208512 ssh_runner.go:195] Run: which crictl
	I0307 22:39:23.172615  208512 ssh_runner.go:195] Run: which crictl
	I0307 22:39:23.176516  208512 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0307 22:39:23.176584  208512 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0307 22:39:23.219510  208512 cri.go:89] found id: "ddaf309f6c6682b8d155b529ee31ae4f0dce1a6a9763127da6df7fdbbd31a16d"
	I0307 22:39:23.219529  208512 cri.go:89] found id: "da7081019e6f16e832848ada7b2e006667f931fc538edab6b1ebd551fa208cbe"
	I0307 22:39:23.219534  208512 cri.go:89] found id: ""
	I0307 22:39:23.219541  208512 logs.go:276] 2 containers: [ddaf309f6c6682b8d155b529ee31ae4f0dce1a6a9763127da6df7fdbbd31a16d da7081019e6f16e832848ada7b2e006667f931fc538edab6b1ebd551fa208cbe]
	I0307 22:39:23.219612  208512 ssh_runner.go:195] Run: which crictl
	I0307 22:39:23.223195  208512 ssh_runner.go:195] Run: which crictl
	I0307 22:39:23.226557  208512 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0307 22:39:23.226673  208512 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0307 22:39:23.269931  208512 cri.go:89] found id: "7c807963c5849afb9cda04a10c0ba18eb4139242b1caf7d11205fa10706faf36"
	I0307 22:39:23.270000  208512 cri.go:89] found id: "775c4272397dc700a1daf97c4e663e7fc3f71729356cee17bd4235ddf6daa2a3"
	I0307 22:39:23.270017  208512 cri.go:89] found id: ""
	I0307 22:39:23.270025  208512 logs.go:276] 2 containers: [7c807963c5849afb9cda04a10c0ba18eb4139242b1caf7d11205fa10706faf36 775c4272397dc700a1daf97c4e663e7fc3f71729356cee17bd4235ddf6daa2a3]
	I0307 22:39:23.270093  208512 ssh_runner.go:195] Run: which crictl
	I0307 22:39:23.273706  208512 ssh_runner.go:195] Run: which crictl
	I0307 22:39:23.277042  208512 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0307 22:39:23.277129  208512 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0307 22:39:23.326962  208512 cri.go:89] found id: "bf0796c3bdeb1fb7cbeadd653a03514f86b475a0faf4571351606a60b1da8a91"
	I0307 22:39:23.326983  208512 cri.go:89] found id: "a52fdbe185cf6604953f1db9bf33ddda100e7929d29ddd019abb2c4f628ccf8a"
	I0307 22:39:23.326988  208512 cri.go:89] found id: ""
	I0307 22:39:23.326995  208512 logs.go:276] 2 containers: [bf0796c3bdeb1fb7cbeadd653a03514f86b475a0faf4571351606a60b1da8a91 a52fdbe185cf6604953f1db9bf33ddda100e7929d29ddd019abb2c4f628ccf8a]
	I0307 22:39:23.327067  208512 ssh_runner.go:195] Run: which crictl
	I0307 22:39:23.330781  208512 ssh_runner.go:195] Run: which crictl
	I0307 22:39:23.334158  208512 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0307 22:39:23.334266  208512 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0307 22:39:23.372751  208512 cri.go:89] found id: "20752f16c26ce3f933b39f1e042151aa9c6c927a788b023f74cd5e26eb4e9dea"
	I0307 22:39:23.372811  208512 cri.go:89] found id: "988c7824310b34904cc0fcc89dceab6a97539faa8b8f508cbcbbdf112c8521fb"
	I0307 22:39:23.372831  208512 cri.go:89] found id: ""
	I0307 22:39:23.372856  208512 logs.go:276] 2 containers: [20752f16c26ce3f933b39f1e042151aa9c6c927a788b023f74cd5e26eb4e9dea 988c7824310b34904cc0fcc89dceab6a97539faa8b8f508cbcbbdf112c8521fb]
	I0307 22:39:23.372927  208512 ssh_runner.go:195] Run: which crictl
	I0307 22:39:23.377045  208512 ssh_runner.go:195] Run: which crictl
	I0307 22:39:23.381497  208512 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0307 22:39:23.381592  208512 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0307 22:39:23.418727  208512 cri.go:89] found id: "ac29cd6286420edd0596fdc8868b3ee5d6b7b98bc806044fb91e4e443a93b393"
	I0307 22:39:23.418789  208512 cri.go:89] found id: "3e0ca1a11903bcbc026af5c35f9936a9948e4225c5df356b916422ffc52a02b8"
	I0307 22:39:23.418811  208512 cri.go:89] found id: ""
	I0307 22:39:23.418826  208512 logs.go:276] 2 containers: [ac29cd6286420edd0596fdc8868b3ee5d6b7b98bc806044fb91e4e443a93b393 3e0ca1a11903bcbc026af5c35f9936a9948e4225c5df356b916422ffc52a02b8]
	I0307 22:39:23.418910  208512 ssh_runner.go:195] Run: which crictl
	I0307 22:39:23.422597  208512 ssh_runner.go:195] Run: which crictl
	I0307 22:39:23.426213  208512 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0307 22:39:23.426279  208512 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0307 22:39:23.469521  208512 cri.go:89] found id: "15345923272f30bf45fa6edcd96846838ad863ca72c9fe79a0ec5ca8d5c28a7e"
	I0307 22:39:23.469543  208512 cri.go:89] found id: ""
	I0307 22:39:23.469552  208512 logs.go:276] 1 containers: [15345923272f30bf45fa6edcd96846838ad863ca72c9fe79a0ec5ca8d5c28a7e]
	I0307 22:39:23.469605  208512 ssh_runner.go:195] Run: which crictl
	I0307 22:39:23.473267  208512 logs.go:123] Gathering logs for storage-provisioner [ac29cd6286420edd0596fdc8868b3ee5d6b7b98bc806044fb91e4e443a93b393] ...
	I0307 22:39:23.473328  208512 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ac29cd6286420edd0596fdc8868b3ee5d6b7b98bc806044fb91e4e443a93b393"
	I0307 22:39:23.511454  208512 logs.go:123] Gathering logs for kubelet ...
	I0307 22:39:23.511519  208512 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 22:39:23.582902  208512 logs.go:123] Gathering logs for etcd [8d3da2ccfc6f3e03a090f259d264c023c0220868e0038f3aab52d0bee1e0f788] ...
	I0307 22:39:23.582936  208512 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d3da2ccfc6f3e03a090f259d264c023c0220868e0038f3aab52d0bee1e0f788"
	I0307 22:39:23.629314  208512 logs.go:123] Gathering logs for coredns [66c0213f3c233ac7b951a4b86aab5ef89cc56a2ca5499180954b801d7ed742d1] ...
	I0307 22:39:23.629346  208512 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 66c0213f3c233ac7b951a4b86aab5ef89cc56a2ca5499180954b801d7ed742d1"
	I0307 22:39:23.677252  208512 logs.go:123] Gathering logs for kube-controller-manager [bf0796c3bdeb1fb7cbeadd653a03514f86b475a0faf4571351606a60b1da8a91] ...
	I0307 22:39:23.677281  208512 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bf0796c3bdeb1fb7cbeadd653a03514f86b475a0faf4571351606a60b1da8a91"
	I0307 22:39:23.754191  208512 logs.go:123] Gathering logs for coredns [cd25d5f2cd96fffd3dc549470243d20ea7e47ef4dcd423227a4a52724f521183] ...
	I0307 22:39:23.754224  208512 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cd25d5f2cd96fffd3dc549470243d20ea7e47ef4dcd423227a4a52724f521183"
	I0307 22:39:23.796160  208512 logs.go:123] Gathering logs for kube-scheduler [da7081019e6f16e832848ada7b2e006667f931fc538edab6b1ebd551fa208cbe] ...
	I0307 22:39:23.796190  208512 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 da7081019e6f16e832848ada7b2e006667f931fc538edab6b1ebd551fa208cbe"
	I0307 22:39:23.846343  208512 logs.go:123] Gathering logs for kube-proxy [775c4272397dc700a1daf97c4e663e7fc3f71729356cee17bd4235ddf6daa2a3] ...
	I0307 22:39:23.846376  208512 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 775c4272397dc700a1daf97c4e663e7fc3f71729356cee17bd4235ddf6daa2a3"
	I0307 22:39:23.893266  208512 logs.go:123] Gathering logs for kindnet [988c7824310b34904cc0fcc89dceab6a97539faa8b8f508cbcbbdf112c8521fb] ...
	I0307 22:39:23.893295  208512 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 988c7824310b34904cc0fcc89dceab6a97539faa8b8f508cbcbbdf112c8521fb"
	I0307 22:39:23.932799  208512 logs.go:123] Gathering logs for storage-provisioner [3e0ca1a11903bcbc026af5c35f9936a9948e4225c5df356b916422ffc52a02b8] ...
	I0307 22:39:23.932827  208512 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e0ca1a11903bcbc026af5c35f9936a9948e4225c5df356b916422ffc52a02b8"
	I0307 22:39:23.970809  208512 logs.go:123] Gathering logs for container status ...
	I0307 22:39:23.970839  208512 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 22:39:24.030721  208512 logs.go:123] Gathering logs for dmesg ...
	I0307 22:39:24.030754  208512 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 22:39:24.059034  208512 logs.go:123] Gathering logs for kube-proxy [7c807963c5849afb9cda04a10c0ba18eb4139242b1caf7d11205fa10706faf36] ...
	I0307 22:39:24.059066  208512 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c807963c5849afb9cda04a10c0ba18eb4139242b1caf7d11205fa10706faf36"
	I0307 22:39:24.101193  208512 logs.go:123] Gathering logs for kube-controller-manager [a52fdbe185cf6604953f1db9bf33ddda100e7929d29ddd019abb2c4f628ccf8a] ...
	I0307 22:39:24.101227  208512 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a52fdbe185cf6604953f1db9bf33ddda100e7929d29ddd019abb2c4f628ccf8a"
	I0307 22:39:24.160531  208512 logs.go:123] Gathering logs for kindnet [20752f16c26ce3f933b39f1e042151aa9c6c927a788b023f74cd5e26eb4e9dea] ...
	I0307 22:39:24.160565  208512 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 20752f16c26ce3f933b39f1e042151aa9c6c927a788b023f74cd5e26eb4e9dea"
	I0307 22:39:24.202320  208512 logs.go:123] Gathering logs for kube-scheduler [ddaf309f6c6682b8d155b529ee31ae4f0dce1a6a9763127da6df7fdbbd31a16d] ...
	I0307 22:39:24.202351  208512 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ddaf309f6c6682b8d155b529ee31ae4f0dce1a6a9763127da6df7fdbbd31a16d"
	I0307 22:39:24.250518  208512 logs.go:123] Gathering logs for kubernetes-dashboard [15345923272f30bf45fa6edcd96846838ad863ca72c9fe79a0ec5ca8d5c28a7e] ...
	I0307 22:39:24.250547  208512 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 15345923272f30bf45fa6edcd96846838ad863ca72c9fe79a0ec5ca8d5c28a7e"
	I0307 22:39:24.298653  208512 logs.go:123] Gathering logs for containerd ...
	I0307 22:39:24.298683  208512 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0307 22:39:24.359323  208512 logs.go:123] Gathering logs for describe nodes ...
	I0307 22:39:24.359360  208512 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 22:39:24.483739  208512 logs.go:123] Gathering logs for kube-apiserver [8538ff6a2530eeddd6961ceeea4e8b8f31e001b211f6b0e42b463f41bfc8eb9b] ...
	I0307 22:39:24.483774  208512 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8538ff6a2530eeddd6961ceeea4e8b8f31e001b211f6b0e42b463f41bfc8eb9b"
	I0307 22:39:24.533361  208512 logs.go:123] Gathering logs for kube-apiserver [3ad41b9769064eab5e8003ea9de4b5b9f58af6a4d81413a5083b9b5abc308170] ...
	I0307 22:39:24.533394  208512 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ad41b9769064eab5e8003ea9de4b5b9f58af6a4d81413a5083b9b5abc308170"
	I0307 22:39:24.597407  208512 logs.go:123] Gathering logs for etcd [61d3947cf55d680139b701ec61051f9ada8b04b18e981f05adeac9a4a7a67d89] ...
	I0307 22:39:24.597442  208512 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 61d3947cf55d680139b701ec61051f9ada8b04b18e981f05adeac9a4a7a67d89"
	I0307 22:39:23.296761  203173 pod_ready.go:102] pod "metrics-server-9975d5f86-qmg9k" in "kube-system" namespace has status "Ready":"False"
	I0307 22:39:25.296213  203173 pod_ready.go:81] duration metric: took 4m0.00634836s for pod "metrics-server-9975d5f86-qmg9k" in "kube-system" namespace to be "Ready" ...
	E0307 22:39:25.296241  203173 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0307 22:39:25.296251  203173 pod_ready.go:38] duration metric: took 5m20.253260881s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0307 22:39:25.296265  203173 api_server.go:52] waiting for apiserver process to appear ...
	I0307 22:39:25.296329  203173 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0307 22:39:25.296395  203173 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0307 22:39:25.336345  203173 cri.go:89] found id: "8b2ed0287cc5bbf594cda29d926ae828c7d6d12ad490285c6e855c8824388d29"
	I0307 22:39:25.336366  203173 cri.go:89] found id: "cc2db588ff857bfb95417bfc8b125ca64e6caba952ba4acd81ec27522fde3127"
	I0307 22:39:25.336371  203173 cri.go:89] found id: ""
	I0307 22:39:25.336378  203173 logs.go:276] 2 containers: [8b2ed0287cc5bbf594cda29d926ae828c7d6d12ad490285c6e855c8824388d29 cc2db588ff857bfb95417bfc8b125ca64e6caba952ba4acd81ec27522fde3127]
	I0307 22:39:25.336438  203173 ssh_runner.go:195] Run: which crictl
	I0307 22:39:25.340420  203173 ssh_runner.go:195] Run: which crictl
	I0307 22:39:25.343703  203173 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0307 22:39:25.343781  203173 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0307 22:39:25.383563  203173 cri.go:89] found id: "14afb08fcf0333a1d360d000f1fa3b158ae712b1b2a3e6c52b164c05245f920d"
	I0307 22:39:25.383585  203173 cri.go:89] found id: "dabbb0b4fd2574e92c3a53182c494ec08df151f99bd21d27189df149b7f8961c"
	I0307 22:39:25.383596  203173 cri.go:89] found id: ""
	I0307 22:39:25.383604  203173 logs.go:276] 2 containers: [14afb08fcf0333a1d360d000f1fa3b158ae712b1b2a3e6c52b164c05245f920d dabbb0b4fd2574e92c3a53182c494ec08df151f99bd21d27189df149b7f8961c]
	I0307 22:39:25.383675  203173 ssh_runner.go:195] Run: which crictl
	I0307 22:39:25.387470  203173 ssh_runner.go:195] Run: which crictl
	I0307 22:39:25.390779  203173 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0307 22:39:25.390841  203173 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0307 22:39:25.437462  203173 cri.go:89] found id: "413452dbe13bfb277303d92484791aa53ae9a198a089364135d2347bf8475623"
	I0307 22:39:25.437481  203173 cri.go:89] found id: "a105c02644c9c901090c4c9792a32b0396cee09395513751eeb1be7fbf4b8477"
	I0307 22:39:25.437486  203173 cri.go:89] found id: ""
	I0307 22:39:25.437493  203173 logs.go:276] 2 containers: [413452dbe13bfb277303d92484791aa53ae9a198a089364135d2347bf8475623 a105c02644c9c901090c4c9792a32b0396cee09395513751eeb1be7fbf4b8477]
	I0307 22:39:25.437547  203173 ssh_runner.go:195] Run: which crictl
	I0307 22:39:25.441300  203173 ssh_runner.go:195] Run: which crictl
	I0307 22:39:25.444236  203173 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0307 22:39:25.444346  203173 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0307 22:39:25.486461  203173 cri.go:89] found id: "157ece61e2d99f79b42530111727fc9798a3b91433c5d3a5cd83cefec75a6a67"
	I0307 22:39:25.486519  203173 cri.go:89] found id: "2ca89421ca012456f503b4b92ac92a69bf6b0042461af4e2e3140e7adb8b27dc"
	I0307 22:39:25.486538  203173 cri.go:89] found id: ""
	I0307 22:39:25.486552  203173 logs.go:276] 2 containers: [157ece61e2d99f79b42530111727fc9798a3b91433c5d3a5cd83cefec75a6a67 2ca89421ca012456f503b4b92ac92a69bf6b0042461af4e2e3140e7adb8b27dc]
	I0307 22:39:25.486610  203173 ssh_runner.go:195] Run: which crictl
	I0307 22:39:25.490758  203173 ssh_runner.go:195] Run: which crictl
	I0307 22:39:25.494081  203173 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0307 22:39:25.494191  203173 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0307 22:39:25.533954  203173 cri.go:89] found id: "7998095a37dd2896f96215e8ab5d29ae495d3dc1a00b119506f8ace55156b07a"
	I0307 22:39:25.534016  203173 cri.go:89] found id: "f1df99d1a705a4556e4be13c16d400b99149438c96bfe6da863b8b041983b036"
	I0307 22:39:25.534046  203173 cri.go:89] found id: ""
	I0307 22:39:25.534060  203173 logs.go:276] 2 containers: [7998095a37dd2896f96215e8ab5d29ae495d3dc1a00b119506f8ace55156b07a f1df99d1a705a4556e4be13c16d400b99149438c96bfe6da863b8b041983b036]
	I0307 22:39:25.534121  203173 ssh_runner.go:195] Run: which crictl
	I0307 22:39:25.537826  203173 ssh_runner.go:195] Run: which crictl
	I0307 22:39:25.542894  203173 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0307 22:39:25.543014  203173 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0307 22:39:25.585812  203173 cri.go:89] found id: "9ba0a9f80bc7b1866c7509e8df81dcc0d3e74b8702a84d7676660a9395cf59fc"
	I0307 22:39:25.585836  203173 cri.go:89] found id: "bd7a60d8d4f26067debaaf5b2c23fe94dcfed76118b17104051bc8c755b35ce1"
	I0307 22:39:25.585842  203173 cri.go:89] found id: ""
	I0307 22:39:25.585855  203173 logs.go:276] 2 containers: [9ba0a9f80bc7b1866c7509e8df81dcc0d3e74b8702a84d7676660a9395cf59fc bd7a60d8d4f26067debaaf5b2c23fe94dcfed76118b17104051bc8c755b35ce1]
	I0307 22:39:25.585922  203173 ssh_runner.go:195] Run: which crictl
	I0307 22:39:25.589442  203173 ssh_runner.go:195] Run: which crictl
	I0307 22:39:25.592832  203173 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0307 22:39:25.592923  203173 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0307 22:39:25.636509  203173 cri.go:89] found id: "2543aa094e87373b5f846e8bdc2bbadbdda2c55f6d326c6db6b9216375dde5e6"
	I0307 22:39:25.636533  203173 cri.go:89] found id: "e9768721745ab392d0ad14d4055c5cf01cc780491a01953b4b45bf77d2d3b062"
	I0307 22:39:25.636538  203173 cri.go:89] found id: ""
	I0307 22:39:25.636545  203173 logs.go:276] 2 containers: [2543aa094e87373b5f846e8bdc2bbadbdda2c55f6d326c6db6b9216375dde5e6 e9768721745ab392d0ad14d4055c5cf01cc780491a01953b4b45bf77d2d3b062]
	I0307 22:39:25.636619  203173 ssh_runner.go:195] Run: which crictl
	I0307 22:39:25.640672  203173 ssh_runner.go:195] Run: which crictl
	I0307 22:39:25.644632  203173 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0307 22:39:25.644725  203173 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0307 22:39:25.685799  203173 cri.go:89] found id: "778e16c451c2e6a7fecd1d1b5d37778e92f50fd757af7d8812222cfa025fdc86"
	I0307 22:39:25.685822  203173 cri.go:89] found id: ""
	I0307 22:39:25.685830  203173 logs.go:276] 1 containers: [778e16c451c2e6a7fecd1d1b5d37778e92f50fd757af7d8812222cfa025fdc86]
	I0307 22:39:25.685911  203173 ssh_runner.go:195] Run: which crictl
	I0307 22:39:25.689934  203173 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0307 22:39:25.690008  203173 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0307 22:39:25.727726  203173 cri.go:89] found id: "426c41631cb95479682cbd7b19483427f710b2f7932b9e6d2bc0367710c2ca1d"
	I0307 22:39:25.727753  203173 cri.go:89] found id: "f98ff695720f5ebea06fa188196a8a6393c29dfd0211d4a0ec784c54cbb9f906"
	I0307 22:39:25.727758  203173 cri.go:89] found id: ""
	I0307 22:39:25.727766  203173 logs.go:276] 2 containers: [426c41631cb95479682cbd7b19483427f710b2f7932b9e6d2bc0367710c2ca1d f98ff695720f5ebea06fa188196a8a6393c29dfd0211d4a0ec784c54cbb9f906]
	I0307 22:39:25.727867  203173 ssh_runner.go:195] Run: which crictl
	I0307 22:39:25.731423  203173 ssh_runner.go:195] Run: which crictl
	I0307 22:39:25.735053  203173 logs.go:123] Gathering logs for containerd ...
	I0307 22:39:25.735120  203173 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0307 22:39:25.796794  203173 logs.go:123] Gathering logs for etcd [14afb08fcf0333a1d360d000f1fa3b158ae712b1b2a3e6c52b164c05245f920d] ...
	I0307 22:39:25.796828  203173 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 14afb08fcf0333a1d360d000f1fa3b158ae712b1b2a3e6c52b164c05245f920d"
	I0307 22:39:25.841590  203173 logs.go:123] Gathering logs for kube-scheduler [157ece61e2d99f79b42530111727fc9798a3b91433c5d3a5cd83cefec75a6a67] ...
	I0307 22:39:25.841620  203173 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 157ece61e2d99f79b42530111727fc9798a3b91433c5d3a5cd83cefec75a6a67"
	I0307 22:39:25.886268  203173 logs.go:123] Gathering logs for kube-proxy [7998095a37dd2896f96215e8ab5d29ae495d3dc1a00b119506f8ace55156b07a] ...
	I0307 22:39:25.886343  203173 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7998095a37dd2896f96215e8ab5d29ae495d3dc1a00b119506f8ace55156b07a"
	I0307 22:39:25.924898  203173 logs.go:123] Gathering logs for kube-scheduler [2ca89421ca012456f503b4b92ac92a69bf6b0042461af4e2e3140e7adb8b27dc] ...
	I0307 22:39:25.924972  203173 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ca89421ca012456f503b4b92ac92a69bf6b0042461af4e2e3140e7adb8b27dc"
	I0307 22:39:25.970416  203173 logs.go:123] Gathering logs for kindnet [2543aa094e87373b5f846e8bdc2bbadbdda2c55f6d326c6db6b9216375dde5e6] ...
	I0307 22:39:25.970447  203173 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2543aa094e87373b5f846e8bdc2bbadbdda2c55f6d326c6db6b9216375dde5e6"
	I0307 22:39:26.015966  203173 logs.go:123] Gathering logs for kindnet [e9768721745ab392d0ad14d4055c5cf01cc780491a01953b4b45bf77d2d3b062] ...
	I0307 22:39:26.015998  203173 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e9768721745ab392d0ad14d4055c5cf01cc780491a01953b4b45bf77d2d3b062"
	I0307 22:39:26.070164  203173 logs.go:123] Gathering logs for kubernetes-dashboard [778e16c451c2e6a7fecd1d1b5d37778e92f50fd757af7d8812222cfa025fdc86] ...
	I0307 22:39:26.070190  203173 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 778e16c451c2e6a7fecd1d1b5d37778e92f50fd757af7d8812222cfa025fdc86"
	I0307 22:39:26.111136  203173 logs.go:123] Gathering logs for container status ...
	I0307 22:39:26.111163  203173 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 22:39:26.163682  203173 logs.go:123] Gathering logs for dmesg ...
	I0307 22:39:26.163711  203173 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 22:39:26.181702  203173 logs.go:123] Gathering logs for kube-apiserver [cc2db588ff857bfb95417bfc8b125ca64e6caba952ba4acd81ec27522fde3127] ...
	I0307 22:39:26.181732  203173 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cc2db588ff857bfb95417bfc8b125ca64e6caba952ba4acd81ec27522fde3127"
	I0307 22:39:26.246451  203173 logs.go:123] Gathering logs for coredns [413452dbe13bfb277303d92484791aa53ae9a198a089364135d2347bf8475623] ...
	I0307 22:39:26.246483  203173 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 413452dbe13bfb277303d92484791aa53ae9a198a089364135d2347bf8475623"
	I0307 22:39:26.295840  203173 logs.go:123] Gathering logs for kube-proxy [f1df99d1a705a4556e4be13c16d400b99149438c96bfe6da863b8b041983b036] ...
	I0307 22:39:26.295868  203173 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f1df99d1a705a4556e4be13c16d400b99149438c96bfe6da863b8b041983b036"
	I0307 22:39:26.341619  203173 logs.go:123] Gathering logs for kube-controller-manager [bd7a60d8d4f26067debaaf5b2c23fe94dcfed76118b17104051bc8c755b35ce1] ...
	I0307 22:39:26.341646  203173 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd7a60d8d4f26067debaaf5b2c23fe94dcfed76118b17104051bc8c755b35ce1"
	I0307 22:39:26.408963  203173 logs.go:123] Gathering logs for kubelet ...
	I0307 22:39:26.408996  203173 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0307 22:39:26.472507  203173 logs.go:138] Found kubelet problem: Mar 07 22:34:05 old-k8s-version-497253 kubelet[660]: E0307 22:34:05.261951     660 reflector.go:138] object-"kube-system"/"kindnet-token-ghqhd": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-ghqhd" is forbidden: User "system:node:old-k8s-version-497253" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-497253' and this object
	W0307 22:39:26.479496  203173 logs.go:138] Found kubelet problem: Mar 07 22:34:07 old-k8s-version-497253 kubelet[660]: E0307 22:34:07.162816     660 pod_workers.go:191] Error syncing pod 1cd023e3-2966-4092-bcbd-4c5b6ca28aa3 ("metrics-server-9975d5f86-qmg9k_kube-system(1cd023e3-2966-4092-bcbd-4c5b6ca28aa3)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0307 22:39:26.479691  203173 logs.go:138] Found kubelet problem: Mar 07 22:34:07 old-k8s-version-497253 kubelet[660]: E0307 22:34:07.277655     660 pod_workers.go:191] Error syncing pod 1cd023e3-2966-4092-bcbd-4c5b6ca28aa3 ("metrics-server-9975d5f86-qmg9k_kube-system(1cd023e3-2966-4092-bcbd-4c5b6ca28aa3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 22:39:26.482575  203173 logs.go:138] Found kubelet problem: Mar 07 22:34:22 old-k8s-version-497253 kubelet[660]: E0307 22:34:22.936411     660 pod_workers.go:191] Error syncing pod 1cd023e3-2966-4092-bcbd-4c5b6ca28aa3 ("metrics-server-9975d5f86-qmg9k_kube-system(1cd023e3-2966-4092-bcbd-4c5b6ca28aa3)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0307 22:39:26.484741  203173 logs.go:138] Found kubelet problem: Mar 07 22:34:36 old-k8s-version-497253 kubelet[660]: E0307 22:34:36.920792     660 pod_workers.go:191] Error syncing pod 1cd023e3-2966-4092-bcbd-4c5b6ca28aa3 ("metrics-server-9975d5f86-qmg9k_kube-system(1cd023e3-2966-4092-bcbd-4c5b6ca28aa3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 22:39:26.485205  203173 logs.go:138] Found kubelet problem: Mar 07 22:34:37 old-k8s-version-497253 kubelet[660]: E0307 22:34:37.462923     660 pod_workers.go:191] Error syncing pod 14312bc8-5525-434e-b6f7-56306d708544 ("dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"
	W0307 22:39:26.485558  203173 logs.go:138] Found kubelet problem: Mar 07 22:34:38 old-k8s-version-497253 kubelet[660]: E0307 22:34:38.465991     660 pod_workers.go:191] Error syncing pod 14312bc8-5525-434e-b6f7-56306d708544 ("dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"
	W0307 22:39:26.486058  203173 logs.go:138] Found kubelet problem: Mar 07 22:34:39 old-k8s-version-497253 kubelet[660]: E0307 22:34:39.469998     660 pod_workers.go:191] Error syncing pod f477a573-d9fd-4235-84a6-32e52bea48e1 ("storage-provisioner_kube-system(f477a573-d9fd-4235-84a6-32e52bea48e1)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(f477a573-d9fd-4235-84a6-32e52bea48e1)"
	W0307 22:39:26.486394  203173 logs.go:138] Found kubelet problem: Mar 07 22:34:41 old-k8s-version-497253 kubelet[660]: E0307 22:34:41.449013     660 pod_workers.go:191] Error syncing pod 14312bc8-5525-434e-b6f7-56306d708544 ("dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"
	W0307 22:39:26.489348  203173 logs.go:138] Found kubelet problem: Mar 07 22:34:51 old-k8s-version-497253 kubelet[660]: E0307 22:34:51.926704     660 pod_workers.go:191] Error syncing pod 1cd023e3-2966-4092-bcbd-4c5b6ca28aa3 ("metrics-server-9975d5f86-qmg9k_kube-system(1cd023e3-2966-4092-bcbd-4c5b6ca28aa3)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0307 22:39:26.489941  203173 logs.go:138] Found kubelet problem: Mar 07 22:34:54 old-k8s-version-497253 kubelet[660]: E0307 22:34:54.505288     660 pod_workers.go:191] Error syncing pod 14312bc8-5525-434e-b6f7-56306d708544 ("dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"
	W0307 22:39:26.490272  203173 logs.go:138] Found kubelet problem: Mar 07 22:35:01 old-k8s-version-497253 kubelet[660]: E0307 22:35:01.448994     660 pod_workers.go:191] Error syncing pod 14312bc8-5525-434e-b6f7-56306d708544 ("dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"
	W0307 22:39:26.490457  203173 logs.go:138] Found kubelet problem: Mar 07 22:35:02 old-k8s-version-497253 kubelet[660]: E0307 22:35:02.915003     660 pod_workers.go:191] Error syncing pod 1cd023e3-2966-4092-bcbd-4c5b6ca28aa3 ("metrics-server-9975d5f86-qmg9k_kube-system(1cd023e3-2966-4092-bcbd-4c5b6ca28aa3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 22:39:26.490785  203173 logs.go:138] Found kubelet problem: Mar 07 22:35:12 old-k8s-version-497253 kubelet[660]: E0307 22:35:12.922852     660 pod_workers.go:191] Error syncing pod 14312bc8-5525-434e-b6f7-56306d708544 ("dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"
	W0307 22:39:26.490970  203173 logs.go:138] Found kubelet problem: Mar 07 22:35:14 old-k8s-version-497253 kubelet[660]: E0307 22:35:14.914572     660 pod_workers.go:191] Error syncing pod 1cd023e3-2966-4092-bcbd-4c5b6ca28aa3 ("metrics-server-9975d5f86-qmg9k_kube-system(1cd023e3-2966-4092-bcbd-4c5b6ca28aa3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 22:39:26.491560  203173 logs.go:138] Found kubelet problem: Mar 07 22:35:25 old-k8s-version-497253 kubelet[660]: E0307 22:35:25.578672     660 pod_workers.go:191] Error syncing pod 14312bc8-5525-434e-b6f7-56306d708544 ("dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"
	W0307 22:39:26.491748  203173 logs.go:138] Found kubelet problem: Mar 07 22:35:26 old-k8s-version-497253 kubelet[660]: E0307 22:35:26.918579     660 pod_workers.go:191] Error syncing pod 1cd023e3-2966-4092-bcbd-4c5b6ca28aa3 ("metrics-server-9975d5f86-qmg9k_kube-system(1cd023e3-2966-4092-bcbd-4c5b6ca28aa3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 22:39:26.492079  203173 logs.go:138] Found kubelet problem: Mar 07 22:35:31 old-k8s-version-497253 kubelet[660]: E0307 22:35:31.448901     660 pod_workers.go:191] Error syncing pod 14312bc8-5525-434e-b6f7-56306d708544 ("dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"
	W0307 22:39:26.494548  203173 logs.go:138] Found kubelet problem: Mar 07 22:35:39 old-k8s-version-497253 kubelet[660]: E0307 22:35:39.938727     660 pod_workers.go:191] Error syncing pod 1cd023e3-2966-4092-bcbd-4c5b6ca28aa3 ("metrics-server-9975d5f86-qmg9k_kube-system(1cd023e3-2966-4092-bcbd-4c5b6ca28aa3)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0307 22:39:26.494876  203173 logs.go:138] Found kubelet problem: Mar 07 22:35:46 old-k8s-version-497253 kubelet[660]: E0307 22:35:46.914538     660 pod_workers.go:191] Error syncing pod 14312bc8-5525-434e-b6f7-56306d708544 ("dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"
	W0307 22:39:26.495064  203173 logs.go:138] Found kubelet problem: Mar 07 22:35:53 old-k8s-version-497253 kubelet[660]: E0307 22:35:53.914358     660 pod_workers.go:191] Error syncing pod 1cd023e3-2966-4092-bcbd-4c5b6ca28aa3 ("metrics-server-9975d5f86-qmg9k_kube-system(1cd023e3-2966-4092-bcbd-4c5b6ca28aa3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 22:39:26.495392  203173 logs.go:138] Found kubelet problem: Mar 07 22:36:00 old-k8s-version-497253 kubelet[660]: E0307 22:36:00.917774     660 pod_workers.go:191] Error syncing pod 14312bc8-5525-434e-b6f7-56306d708544 ("dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"
	W0307 22:39:26.495577  203173 logs.go:138] Found kubelet problem: Mar 07 22:36:05 old-k8s-version-497253 kubelet[660]: E0307 22:36:05.914412     660 pod_workers.go:191] Error syncing pod 1cd023e3-2966-4092-bcbd-4c5b6ca28aa3 ("metrics-server-9975d5f86-qmg9k_kube-system(1cd023e3-2966-4092-bcbd-4c5b6ca28aa3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 22:39:26.496168  203173 logs.go:138] Found kubelet problem: Mar 07 22:36:15 old-k8s-version-497253 kubelet[660]: E0307 22:36:15.682987     660 pod_workers.go:191] Error syncing pod 14312bc8-5525-434e-b6f7-56306d708544 ("dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"
	W0307 22:39:26.496383  203173 logs.go:138] Found kubelet problem: Mar 07 22:36:18 old-k8s-version-497253 kubelet[660]: E0307 22:36:18.914367     660 pod_workers.go:191] Error syncing pod 1cd023e3-2966-4092-bcbd-4c5b6ca28aa3 ("metrics-server-9975d5f86-qmg9k_kube-system(1cd023e3-2966-4092-bcbd-4c5b6ca28aa3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 22:39:26.496715  203173 logs.go:138] Found kubelet problem: Mar 07 22:36:21 old-k8s-version-497253 kubelet[660]: E0307 22:36:21.449525     660 pod_workers.go:191] Error syncing pod 14312bc8-5525-434e-b6f7-56306d708544 ("dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"
	W0307 22:39:26.496902  203173 logs.go:138] Found kubelet problem: Mar 07 22:36:31 old-k8s-version-497253 kubelet[660]: E0307 22:36:31.914470     660 pod_workers.go:191] Error syncing pod 1cd023e3-2966-4092-bcbd-4c5b6ca28aa3 ("metrics-server-9975d5f86-qmg9k_kube-system(1cd023e3-2966-4092-bcbd-4c5b6ca28aa3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 22:39:26.497228  203173 logs.go:138] Found kubelet problem: Mar 07 22:36:33 old-k8s-version-497253 kubelet[660]: E0307 22:36:33.914221     660 pod_workers.go:191] Error syncing pod 14312bc8-5525-434e-b6f7-56306d708544 ("dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"
	W0307 22:39:26.497446  203173 logs.go:138] Found kubelet problem: Mar 07 22:36:42 old-k8s-version-497253 kubelet[660]: E0307 22:36:42.914696     660 pod_workers.go:191] Error syncing pod 1cd023e3-2966-4092-bcbd-4c5b6ca28aa3 ("metrics-server-9975d5f86-qmg9k_kube-system(1cd023e3-2966-4092-bcbd-4c5b6ca28aa3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 22:39:26.497795  203173 logs.go:138] Found kubelet problem: Mar 07 22:36:45 old-k8s-version-497253 kubelet[660]: E0307 22:36:45.914100     660 pod_workers.go:191] Error syncing pod 14312bc8-5525-434e-b6f7-56306d708544 ("dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"
	W0307 22:39:26.497981  203173 logs.go:138] Found kubelet problem: Mar 07 22:36:57 old-k8s-version-497253 kubelet[660]: E0307 22:36:57.914372     660 pod_workers.go:191] Error syncing pod 1cd023e3-2966-4092-bcbd-4c5b6ca28aa3 ("metrics-server-9975d5f86-qmg9k_kube-system(1cd023e3-2966-4092-bcbd-4c5b6ca28aa3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 22:39:26.498309  203173 logs.go:138] Found kubelet problem: Mar 07 22:36:58 old-k8s-version-497253 kubelet[660]: E0307 22:36:58.914093     660 pod_workers.go:191] Error syncing pod 14312bc8-5525-434e-b6f7-56306d708544 ("dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"
	W0307 22:39:26.498659  203173 logs.go:138] Found kubelet problem: Mar 07 22:37:10 old-k8s-version-497253 kubelet[660]: E0307 22:37:10.914874     660 pod_workers.go:191] Error syncing pod 14312bc8-5525-434e-b6f7-56306d708544 ("dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"
	W0307 22:39:26.501139  203173 logs.go:138] Found kubelet problem: Mar 07 22:37:10 old-k8s-version-497253 kubelet[660]: E0307 22:37:10.928532     660 pod_workers.go:191] Error syncing pod 1cd023e3-2966-4092-bcbd-4c5b6ca28aa3 ("metrics-server-9975d5f86-qmg9k_kube-system(1cd023e3-2966-4092-bcbd-4c5b6ca28aa3)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0307 22:39:26.501328  203173 logs.go:138] Found kubelet problem: Mar 07 22:37:21 old-k8s-version-497253 kubelet[660]: E0307 22:37:21.914234     660 pod_workers.go:191] Error syncing pod 1cd023e3-2966-4092-bcbd-4c5b6ca28aa3 ("metrics-server-9975d5f86-qmg9k_kube-system(1cd023e3-2966-4092-bcbd-4c5b6ca28aa3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 22:39:26.501658  203173 logs.go:138] Found kubelet problem: Mar 07 22:37:23 old-k8s-version-497253 kubelet[660]: E0307 22:37:23.914014     660 pod_workers.go:191] Error syncing pod 14312bc8-5525-434e-b6f7-56306d708544 ("dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"
	W0307 22:39:26.501850  203173 logs.go:138] Found kubelet problem: Mar 07 22:37:33 old-k8s-version-497253 kubelet[660]: E0307 22:37:33.914590     660 pod_workers.go:191] Error syncing pod 1cd023e3-2966-4092-bcbd-4c5b6ca28aa3 ("metrics-server-9975d5f86-qmg9k_kube-system(1cd023e3-2966-4092-bcbd-4c5b6ca28aa3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 22:39:26.502442  203173 logs.go:138] Found kubelet problem: Mar 07 22:37:37 old-k8s-version-497253 kubelet[660]: E0307 22:37:37.858788     660 pod_workers.go:191] Error syncing pod 14312bc8-5525-434e-b6f7-56306d708544 ("dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"
	W0307 22:39:26.502771  203173 logs.go:138] Found kubelet problem: Mar 07 22:37:41 old-k8s-version-497253 kubelet[660]: E0307 22:37:41.449456     660 pod_workers.go:191] Error syncing pod 14312bc8-5525-434e-b6f7-56306d708544 ("dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"
	W0307 22:39:26.502959  203173 logs.go:138] Found kubelet problem: Mar 07 22:37:46 old-k8s-version-497253 kubelet[660]: E0307 22:37:46.917492     660 pod_workers.go:191] Error syncing pod 1cd023e3-2966-4092-bcbd-4c5b6ca28aa3 ("metrics-server-9975d5f86-qmg9k_kube-system(1cd023e3-2966-4092-bcbd-4c5b6ca28aa3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 22:39:26.503286  203173 logs.go:138] Found kubelet problem: Mar 07 22:37:56 old-k8s-version-497253 kubelet[660]: E0307 22:37:56.914065     660 pod_workers.go:191] Error syncing pod 14312bc8-5525-434e-b6f7-56306d708544 ("dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"
	W0307 22:39:26.503470  203173 logs.go:138] Found kubelet problem: Mar 07 22:37:57 old-k8s-version-497253 kubelet[660]: E0307 22:37:57.915061     660 pod_workers.go:191] Error syncing pod 1cd023e3-2966-4092-bcbd-4c5b6ca28aa3 ("metrics-server-9975d5f86-qmg9k_kube-system(1cd023e3-2966-4092-bcbd-4c5b6ca28aa3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 22:39:26.503655  203173 logs.go:138] Found kubelet problem: Mar 07 22:38:09 old-k8s-version-497253 kubelet[660]: E0307 22:38:09.914552     660 pod_workers.go:191] Error syncing pod 1cd023e3-2966-4092-bcbd-4c5b6ca28aa3 ("metrics-server-9975d5f86-qmg9k_kube-system(1cd023e3-2966-4092-bcbd-4c5b6ca28aa3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 22:39:26.503988  203173 logs.go:138] Found kubelet problem: Mar 07 22:38:11 old-k8s-version-497253 kubelet[660]: E0307 22:38:11.913955     660 pod_workers.go:191] Error syncing pod 14312bc8-5525-434e-b6f7-56306d708544 ("dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"
	W0307 22:39:26.504175  203173 logs.go:138] Found kubelet problem: Mar 07 22:38:23 old-k8s-version-497253 kubelet[660]: E0307 22:38:23.914295     660 pod_workers.go:191] Error syncing pod 1cd023e3-2966-4092-bcbd-4c5b6ca28aa3 ("metrics-server-9975d5f86-qmg9k_kube-system(1cd023e3-2966-4092-bcbd-4c5b6ca28aa3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 22:39:26.504556  203173 logs.go:138] Found kubelet problem: Mar 07 22:38:26 old-k8s-version-497253 kubelet[660]: E0307 22:38:26.914365     660 pod_workers.go:191] Error syncing pod 14312bc8-5525-434e-b6f7-56306d708544 ("dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"
	W0307 22:39:26.504744  203173 logs.go:138] Found kubelet problem: Mar 07 22:38:35 old-k8s-version-497253 kubelet[660]: E0307 22:38:35.914352     660 pod_workers.go:191] Error syncing pod 1cd023e3-2966-4092-bcbd-4c5b6ca28aa3 ("metrics-server-9975d5f86-qmg9k_kube-system(1cd023e3-2966-4092-bcbd-4c5b6ca28aa3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 22:39:26.505086  203173 logs.go:138] Found kubelet problem: Mar 07 22:38:38 old-k8s-version-497253 kubelet[660]: E0307 22:38:38.914978     660 pod_workers.go:191] Error syncing pod 14312bc8-5525-434e-b6f7-56306d708544 ("dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"
	W0307 22:39:26.505272  203173 logs.go:138] Found kubelet problem: Mar 07 22:38:48 old-k8s-version-497253 kubelet[660]: E0307 22:38:48.914308     660 pod_workers.go:191] Error syncing pod 1cd023e3-2966-4092-bcbd-4c5b6ca28aa3 ("metrics-server-9975d5f86-qmg9k_kube-system(1cd023e3-2966-4092-bcbd-4c5b6ca28aa3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 22:39:26.505628  203173 logs.go:138] Found kubelet problem: Mar 07 22:38:50 old-k8s-version-497253 kubelet[660]: E0307 22:38:50.914584     660 pod_workers.go:191] Error syncing pod 14312bc8-5525-434e-b6f7-56306d708544 ("dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"
	W0307 22:39:26.505961  203173 logs.go:138] Found kubelet problem: Mar 07 22:39:01 old-k8s-version-497253 kubelet[660]: E0307 22:39:01.914578     660 pod_workers.go:191] Error syncing pod 14312bc8-5525-434e-b6f7-56306d708544 ("dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"
	W0307 22:39:26.506151  203173 logs.go:138] Found kubelet problem: Mar 07 22:39:03 old-k8s-version-497253 kubelet[660]: E0307 22:39:03.914427     660 pod_workers.go:191] Error syncing pod 1cd023e3-2966-4092-bcbd-4c5b6ca28aa3 ("metrics-server-9975d5f86-qmg9k_kube-system(1cd023e3-2966-4092-bcbd-4c5b6ca28aa3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 22:39:26.506484  203173 logs.go:138] Found kubelet problem: Mar 07 22:39:15 old-k8s-version-497253 kubelet[660]: E0307 22:39:15.913973     660 pod_workers.go:191] Error syncing pod 14312bc8-5525-434e-b6f7-56306d708544 ("dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"
	W0307 22:39:26.506668  203173 logs.go:138] Found kubelet problem: Mar 07 22:39:16 old-k8s-version-497253 kubelet[660]: E0307 22:39:16.918241     660 pod_workers.go:191] Error syncing pod 1cd023e3-2966-4092-bcbd-4c5b6ca28aa3 ("metrics-server-9975d5f86-qmg9k_kube-system(1cd023e3-2966-4092-bcbd-4c5b6ca28aa3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
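
The kubelet warnings above boil down to two recurring failures on the old-k8s-version-497253 profile: metrics-server-9975d5f86-qmg9k cannot pull fake.domain/registry.k8s.io/echoserver:1.4 because the registry host does not resolve (so ErrImagePull alternates with ImagePullBackOff), and dashboard-metrics-scraper-8d5bb5db8-85hs8 keeps crashing, with kubelet's CrashLoopBackOff delay doubling from 10s through 2m40s (kubelet caps this backoff at 5m). A quick way to confirm the same state from outside the node is sketched below; it assumes the kubeconfig context carries the profile name, which is how minikube configures contexts by default.

	# Sketch: inspect the two failing pods named in the log above.
	kubectl --context old-k8s-version-497253 -n kube-system \
	  describe pod metrics-server-9975d5f86-qmg9k | grep -A5 'Events:'
	kubectl --context old-k8s-version-497253 -n kubernetes-dashboard \
	  get pod dashboard-metrics-scraper-8d5bb5db8-85hs8 \
	  -o jsonpath='{.status.containerStatuses[0].state}'
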
	I0307 22:39:26.506680  203173 logs.go:123] Gathering logs for kube-apiserver [8b2ed0287cc5bbf594cda29d926ae828c7d6d12ad490285c6e855c8824388d29] ...
	I0307 22:39:26.506725  203173 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8b2ed0287cc5bbf594cda29d926ae828c7d6d12ad490285c6e855c8824388d29"
	I0307 22:39:26.567581  203173 logs.go:123] Gathering logs for coredns [a105c02644c9c901090c4c9792a32b0396cee09395513751eeb1be7fbf4b8477] ...
	I0307 22:39:26.567616  203173 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a105c02644c9c901090c4c9792a32b0396cee09395513751eeb1be7fbf4b8477"
	I0307 22:39:26.617169  203173 logs.go:123] Gathering logs for storage-provisioner [426c41631cb95479682cbd7b19483427f710b2f7932b9e6d2bc0367710c2ca1d] ...
	I0307 22:39:26.617197  203173 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 426c41631cb95479682cbd7b19483427f710b2f7932b9e6d2bc0367710c2ca1d"
	I0307 22:39:26.667945  203173 logs.go:123] Gathering logs for storage-provisioner [f98ff695720f5ebea06fa188196a8a6393c29dfd0211d4a0ec784c54cbb9f906] ...
	I0307 22:39:26.667973  203173 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f98ff695720f5ebea06fa188196a8a6393c29dfd0211d4a0ec784c54cbb9f906"
	I0307 22:39:26.707720  203173 logs.go:123] Gathering logs for describe nodes ...
	I0307 22:39:26.707752  203173 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 22:39:26.855434  203173 logs.go:123] Gathering logs for etcd [dabbb0b4fd2574e92c3a53182c494ec08df151f99bd21d27189df149b7f8961c] ...
	I0307 22:39:26.855466  203173 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dabbb0b4fd2574e92c3a53182c494ec08df151f99bd21d27189df149b7f8961c"
	I0307 22:39:26.907427  203173 logs.go:123] Gathering logs for kube-controller-manager [9ba0a9f80bc7b1866c7509e8df81dcc0d3e74b8702a84d7676660a9395cf59fc] ...
	I0307 22:39:26.907454  203173 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ba0a9f80bc7b1866c7509e8df81dcc0d3e74b8702a84d7676660a9395cf59fc"
	I0307 22:39:26.977630  203173 out.go:304] Setting ErrFile to fd 2...
	I0307 22:39:26.977706  203173 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0307 22:39:26.977771  203173 out.go:239] X Problems detected in kubelet:
	W0307 22:39:26.977934  203173 out.go:239]   Mar 07 22:38:50 old-k8s-version-497253 kubelet[660]: E0307 22:38:50.914584     660 pod_workers.go:191] Error syncing pod 14312bc8-5525-434e-b6f7-56306d708544 ("dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"
	W0307 22:39:26.977983  203173 out.go:239]   Mar 07 22:39:01 old-k8s-version-497253 kubelet[660]: E0307 22:39:01.914578     660 pod_workers.go:191] Error syncing pod 14312bc8-5525-434e-b6f7-56306d708544 ("dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"
	W0307 22:39:26.978041  203173 out.go:239]   Mar 07 22:39:03 old-k8s-version-497253 kubelet[660]: E0307 22:39:03.914427     660 pod_workers.go:191] Error syncing pod 1cd023e3-2966-4092-bcbd-4c5b6ca28aa3 ("metrics-server-9975d5f86-qmg9k_kube-system(1cd023e3-2966-4092-bcbd-4c5b6ca28aa3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 22:39:26.978077  203173 out.go:239]   Mar 07 22:39:15 old-k8s-version-497253 kubelet[660]: E0307 22:39:15.913973     660 pod_workers.go:191] Error syncing pod 14312bc8-5525-434e-b6f7-56306d708544 ("dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"
	W0307 22:39:26.978110  203173 out.go:239]   Mar 07 22:39:16 old-k8s-version-497253 kubelet[660]: E0307 22:39:16.918241     660 pod_workers.go:191] Error syncing pod 1cd023e3-2966-4092-bcbd-4c5b6ca28aa3 ("metrics-server-9975d5f86-qmg9k_kube-system(1cd023e3-2966-4092-bcbd-4c5b6ca28aa3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0307 22:39:26.978155  203173 out.go:304] Setting ErrFile to fd 2...
	I0307 22:39:26.978179  203173 out.go:338] TERM=,COLORTERM=, which probably does not support color
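
Everything from 22:39:25.296 to this point is one post-mortem sweep by the first runner (PID 203173): for each control-plane component it asks the CRI for matching container IDs with sudo crictl ps -a --quiet --name=<component>, then tails each hit with crictl logs --tail 400, alongside journalctl pulls for the kubelet and containerd units. The same sweep can be replayed by hand from inside the node; the sketch below assumes shell access (for example via minikube ssh) and crictl on the PATH.

	# Sketch: replay minikube's per-component log sweep inside the node.
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet kubernetes-dashboard storage-provisioner; do
	  # --quiet prints bare container IDs; --name filters by container name.
	  for id in $(sudo crictl ps -a --quiet --name="$name"); do
	    echo "=== $name ($id) ==="
	    sudo crictl logs --tail 400 "$id"
	  done
	done
	# The journald units are collected the same way:
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u containerd -n 400
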
	I0307 22:39:27.145828  208512 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0307 22:39:27.153472  208512 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0307 22:39:27.154680  208512 api_server.go:141] control plane version: v1.29.0-rc.2
	I0307 22:39:27.154704  208512 api_server.go:131] duration metric: took 4.124792722s to wait for apiserver health ...
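
Interleaved with the sweep above, a second runner (PID 208512, the no-preload-767597 profile) polls the apiserver's /healthz endpoint until it returns 200 with body "ok", then records the control-plane version. Under default RBAC the health and version endpoints are readable anonymously, so the probe can be repeated with curl; -k skips certificate verification, which is acceptable for a liveness poke (address copied from the log).

	# Sketch: the same health probe by hand.
	curl -sk https://192.168.85.2:8443/healthz    # expect: ok
	curl -sk https://192.168.85.2:8443/version    # reports v1.29.0-rc.2 here
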
	I0307 22:39:27.154712  208512 system_pods.go:43] waiting for kube-system pods to appear ...
	I0307 22:39:27.154733  208512 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0307 22:39:27.154795  208512 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0307 22:39:27.193045  208512 cri.go:89] found id: "8538ff6a2530eeddd6961ceeea4e8b8f31e001b211f6b0e42b463f41bfc8eb9b"
	I0307 22:39:27.193067  208512 cri.go:89] found id: "3ad41b9769064eab5e8003ea9de4b5b9f58af6a4d81413a5083b9b5abc308170"
	I0307 22:39:27.193072  208512 cri.go:89] found id: ""
	I0307 22:39:27.193079  208512 logs.go:276] 2 containers: [8538ff6a2530eeddd6961ceeea4e8b8f31e001b211f6b0e42b463f41bfc8eb9b 3ad41b9769064eab5e8003ea9de4b5b9f58af6a4d81413a5083b9b5abc308170]
	I0307 22:39:27.193139  208512 ssh_runner.go:195] Run: which crictl
	I0307 22:39:27.196788  208512 ssh_runner.go:195] Run: which crictl
	I0307 22:39:27.200449  208512 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0307 22:39:27.200547  208512 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0307 22:39:27.246704  208512 cri.go:89] found id: "8d3da2ccfc6f3e03a090f259d264c023c0220868e0038f3aab52d0bee1e0f788"
	I0307 22:39:27.246727  208512 cri.go:89] found id: "61d3947cf55d680139b701ec61051f9ada8b04b18e981f05adeac9a4a7a67d89"
	I0307 22:39:27.246732  208512 cri.go:89] found id: ""
	I0307 22:39:27.246739  208512 logs.go:276] 2 containers: [8d3da2ccfc6f3e03a090f259d264c023c0220868e0038f3aab52d0bee1e0f788 61d3947cf55d680139b701ec61051f9ada8b04b18e981f05adeac9a4a7a67d89]
	I0307 22:39:27.246793  208512 ssh_runner.go:195] Run: which crictl
	I0307 22:39:27.251566  208512 ssh_runner.go:195] Run: which crictl
	I0307 22:39:27.255110  208512 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0307 22:39:27.255188  208512 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0307 22:39:27.295321  208512 cri.go:89] found id: "66c0213f3c233ac7b951a4b86aab5ef89cc56a2ca5499180954b801d7ed742d1"
	I0307 22:39:27.295345  208512 cri.go:89] found id: "cd25d5f2cd96fffd3dc549470243d20ea7e47ef4dcd423227a4a52724f521183"
	I0307 22:39:27.295350  208512 cri.go:89] found id: ""
	I0307 22:39:27.295357  208512 logs.go:276] 2 containers: [66c0213f3c233ac7b951a4b86aab5ef89cc56a2ca5499180954b801d7ed742d1 cd25d5f2cd96fffd3dc549470243d20ea7e47ef4dcd423227a4a52724f521183]
	I0307 22:39:27.295410  208512 ssh_runner.go:195] Run: which crictl
	I0307 22:39:27.299533  208512 ssh_runner.go:195] Run: which crictl
	I0307 22:39:27.303015  208512 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0307 22:39:27.303089  208512 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0307 22:39:27.349488  208512 cri.go:89] found id: "ddaf309f6c6682b8d155b529ee31ae4f0dce1a6a9763127da6df7fdbbd31a16d"
	I0307 22:39:27.349512  208512 cri.go:89] found id: "da7081019e6f16e832848ada7b2e006667f931fc538edab6b1ebd551fa208cbe"
	I0307 22:39:27.349521  208512 cri.go:89] found id: ""
	I0307 22:39:27.349529  208512 logs.go:276] 2 containers: [ddaf309f6c6682b8d155b529ee31ae4f0dce1a6a9763127da6df7fdbbd31a16d da7081019e6f16e832848ada7b2e006667f931fc538edab6b1ebd551fa208cbe]
	I0307 22:39:27.349597  208512 ssh_runner.go:195] Run: which crictl
	I0307 22:39:27.353355  208512 ssh_runner.go:195] Run: which crictl
	I0307 22:39:27.356796  208512 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0307 22:39:27.356881  208512 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0307 22:39:27.405902  208512 cri.go:89] found id: "7c807963c5849afb9cda04a10c0ba18eb4139242b1caf7d11205fa10706faf36"
	I0307 22:39:27.405938  208512 cri.go:89] found id: "775c4272397dc700a1daf97c4e663e7fc3f71729356cee17bd4235ddf6daa2a3"
	I0307 22:39:27.405943  208512 cri.go:89] found id: ""
	I0307 22:39:27.405951  208512 logs.go:276] 2 containers: [7c807963c5849afb9cda04a10c0ba18eb4139242b1caf7d11205fa10706faf36 775c4272397dc700a1daf97c4e663e7fc3f71729356cee17bd4235ddf6daa2a3]
	I0307 22:39:27.406059  208512 ssh_runner.go:195] Run: which crictl
	I0307 22:39:27.410551  208512 ssh_runner.go:195] Run: which crictl
	I0307 22:39:27.418132  208512 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0307 22:39:27.418235  208512 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0307 22:39:27.457792  208512 cri.go:89] found id: "bf0796c3bdeb1fb7cbeadd653a03514f86b475a0faf4571351606a60b1da8a91"
	I0307 22:39:27.457873  208512 cri.go:89] found id: "a52fdbe185cf6604953f1db9bf33ddda100e7929d29ddd019abb2c4f628ccf8a"
	I0307 22:39:27.457885  208512 cri.go:89] found id: ""
	I0307 22:39:27.457894  208512 logs.go:276] 2 containers: [bf0796c3bdeb1fb7cbeadd653a03514f86b475a0faf4571351606a60b1da8a91 a52fdbe185cf6604953f1db9bf33ddda100e7929d29ddd019abb2c4f628ccf8a]
	I0307 22:39:27.457950  208512 ssh_runner.go:195] Run: which crictl
	I0307 22:39:27.462384  208512 ssh_runner.go:195] Run: which crictl
	I0307 22:39:27.465848  208512 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0307 22:39:27.465922  208512 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0307 22:39:27.505688  208512 cri.go:89] found id: "20752f16c26ce3f933b39f1e042151aa9c6c927a788b023f74cd5e26eb4e9dea"
	I0307 22:39:27.505710  208512 cri.go:89] found id: "988c7824310b34904cc0fcc89dceab6a97539faa8b8f508cbcbbdf112c8521fb"
	I0307 22:39:27.505715  208512 cri.go:89] found id: ""
	I0307 22:39:27.505723  208512 logs.go:276] 2 containers: [20752f16c26ce3f933b39f1e042151aa9c6c927a788b023f74cd5e26eb4e9dea 988c7824310b34904cc0fcc89dceab6a97539faa8b8f508cbcbbdf112c8521fb]
	I0307 22:39:27.505777  208512 ssh_runner.go:195] Run: which crictl
	I0307 22:39:27.510061  208512 ssh_runner.go:195] Run: which crictl
	I0307 22:39:27.514254  208512 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0307 22:39:27.514325  208512 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0307 22:39:27.556909  208512 cri.go:89] found id: "15345923272f30bf45fa6edcd96846838ad863ca72c9fe79a0ec5ca8d5c28a7e"
	I0307 22:39:27.556943  208512 cri.go:89] found id: ""
	I0307 22:39:27.556952  208512 logs.go:276] 1 containers: [15345923272f30bf45fa6edcd96846838ad863ca72c9fe79a0ec5ca8d5c28a7e]
	I0307 22:39:27.557005  208512 ssh_runner.go:195] Run: which crictl
	I0307 22:39:27.560920  208512 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0307 22:39:27.560999  208512 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0307 22:39:27.603029  208512 cri.go:89] found id: "ac29cd6286420edd0596fdc8868b3ee5d6b7b98bc806044fb91e4e443a93b393"
	I0307 22:39:27.603055  208512 cri.go:89] found id: "3e0ca1a11903bcbc026af5c35f9936a9948e4225c5df356b916422ffc52a02b8"
	I0307 22:39:27.603061  208512 cri.go:89] found id: ""
	I0307 22:39:27.603068  208512 logs.go:276] 2 containers: [ac29cd6286420edd0596fdc8868b3ee5d6b7b98bc806044fb91e4e443a93b393 3e0ca1a11903bcbc026af5c35f9936a9948e4225c5df356b916422ffc52a02b8]
	I0307 22:39:27.603122  208512 ssh_runner.go:195] Run: which crictl
	I0307 22:39:27.606583  208512 ssh_runner.go:195] Run: which crictl
	I0307 22:39:27.610185  208512 logs.go:123] Gathering logs for kube-scheduler [ddaf309f6c6682b8d155b529ee31ae4f0dce1a6a9763127da6df7fdbbd31a16d] ...
	I0307 22:39:27.610219  208512 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ddaf309f6c6682b8d155b529ee31ae4f0dce1a6a9763127da6df7fdbbd31a16d"
	I0307 22:39:27.659712  208512 logs.go:123] Gathering logs for dmesg ...
	I0307 22:39:27.659783  208512 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 22:39:27.677490  208512 logs.go:123] Gathering logs for describe nodes ...
	I0307 22:39:27.677519  208512 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 22:39:27.801447  208512 logs.go:123] Gathering logs for kube-proxy [775c4272397dc700a1daf97c4e663e7fc3f71729356cee17bd4235ddf6daa2a3] ...
	I0307 22:39:27.801480  208512 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 775c4272397dc700a1daf97c4e663e7fc3f71729356cee17bd4235ddf6daa2a3"
	I0307 22:39:27.848958  208512 logs.go:123] Gathering logs for kindnet [988c7824310b34904cc0fcc89dceab6a97539faa8b8f508cbcbbdf112c8521fb] ...
	I0307 22:39:27.848985  208512 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 988c7824310b34904cc0fcc89dceab6a97539faa8b8f508cbcbbdf112c8521fb"
	I0307 22:39:27.889814  208512 logs.go:123] Gathering logs for kubelet ...
	I0307 22:39:27.889843  208512 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0307 22:39:27.965467  208512 logs.go:123] Gathering logs for kube-apiserver [8538ff6a2530eeddd6961ceeea4e8b8f31e001b211f6b0e42b463f41bfc8eb9b] ...
	I0307 22:39:27.965507  208512 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8538ff6a2530eeddd6961ceeea4e8b8f31e001b211f6b0e42b463f41bfc8eb9b"
	I0307 22:39:28.028155  208512 logs.go:123] Gathering logs for kube-proxy [7c807963c5849afb9cda04a10c0ba18eb4139242b1caf7d11205fa10706faf36] ...
	I0307 22:39:28.028194  208512 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c807963c5849afb9cda04a10c0ba18eb4139242b1caf7d11205fa10706faf36"
	I0307 22:39:28.073330  208512 logs.go:123] Gathering logs for kube-controller-manager [a52fdbe185cf6604953f1db9bf33ddda100e7929d29ddd019abb2c4f628ccf8a] ...
	I0307 22:39:28.073357  208512 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a52fdbe185cf6604953f1db9bf33ddda100e7929d29ddd019abb2c4f628ccf8a"
	I0307 22:39:28.150355  208512 logs.go:123] Gathering logs for kindnet [20752f16c26ce3f933b39f1e042151aa9c6c927a788b023f74cd5e26eb4e9dea] ...
	I0307 22:39:28.150387  208512 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 20752f16c26ce3f933b39f1e042151aa9c6c927a788b023f74cd5e26eb4e9dea"
	I0307 22:39:28.194285  208512 logs.go:123] Gathering logs for kubernetes-dashboard [15345923272f30bf45fa6edcd96846838ad863ca72c9fe79a0ec5ca8d5c28a7e] ...
	I0307 22:39:28.194316  208512 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 15345923272f30bf45fa6edcd96846838ad863ca72c9fe79a0ec5ca8d5c28a7e"
	I0307 22:39:28.233518  208512 logs.go:123] Gathering logs for storage-provisioner [3e0ca1a11903bcbc026af5c35f9936a9948e4225c5df356b916422ffc52a02b8] ...
	I0307 22:39:28.233551  208512 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e0ca1a11903bcbc026af5c35f9936a9948e4225c5df356b916422ffc52a02b8"
	I0307 22:39:28.275721  208512 logs.go:123] Gathering logs for container status ...
	I0307 22:39:28.275754  208512 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 22:39:28.327836  208512 logs.go:123] Gathering logs for kube-apiserver [3ad41b9769064eab5e8003ea9de4b5b9f58af6a4d81413a5083b9b5abc308170] ...
	I0307 22:39:28.327868  208512 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3ad41b9769064eab5e8003ea9de4b5b9f58af6a4d81413a5083b9b5abc308170"
	I0307 22:39:28.374408  208512 logs.go:123] Gathering logs for kube-scheduler [da7081019e6f16e832848ada7b2e006667f931fc538edab6b1ebd551fa208cbe] ...
	I0307 22:39:28.374441  208512 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 da7081019e6f16e832848ada7b2e006667f931fc538edab6b1ebd551fa208cbe"
	I0307 22:39:28.420692  208512 logs.go:123] Gathering logs for coredns [66c0213f3c233ac7b951a4b86aab5ef89cc56a2ca5499180954b801d7ed742d1] ...
	I0307 22:39:28.420724  208512 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 66c0213f3c233ac7b951a4b86aab5ef89cc56a2ca5499180954b801d7ed742d1"
	I0307 22:39:28.461500  208512 logs.go:123] Gathering logs for coredns [cd25d5f2cd96fffd3dc549470243d20ea7e47ef4dcd423227a4a52724f521183] ...
	I0307 22:39:28.461530  208512 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cd25d5f2cd96fffd3dc549470243d20ea7e47ef4dcd423227a4a52724f521183"
	I0307 22:39:28.498755  208512 logs.go:123] Gathering logs for kube-controller-manager [bf0796c3bdeb1fb7cbeadd653a03514f86b475a0faf4571351606a60b1da8a91] ...
	I0307 22:39:28.498786  208512 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bf0796c3bdeb1fb7cbeadd653a03514f86b475a0faf4571351606a60b1da8a91"
	I0307 22:39:28.559697  208512 logs.go:123] Gathering logs for storage-provisioner [ac29cd6286420edd0596fdc8868b3ee5d6b7b98bc806044fb91e4e443a93b393] ...
	I0307 22:39:28.559734  208512 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ac29cd6286420edd0596fdc8868b3ee5d6b7b98bc806044fb91e4e443a93b393"
	I0307 22:39:28.606479  208512 logs.go:123] Gathering logs for containerd ...
	I0307 22:39:28.606506  208512 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0307 22:39:28.678573  208512 logs.go:123] Gathering logs for etcd [8d3da2ccfc6f3e03a090f259d264c023c0220868e0038f3aab52d0bee1e0f788] ...
	I0307 22:39:28.678619  208512 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d3da2ccfc6f3e03a090f259d264c023c0220868e0038f3aab52d0bee1e0f788"
	I0307 22:39:28.728077  208512 logs.go:123] Gathering logs for etcd [61d3947cf55d680139b701ec61051f9ada8b04b18e981f05adeac9a4a7a67d89] ...
	I0307 22:39:28.728113  208512 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 61d3947cf55d680139b701ec61051f9ada8b04b18e981f05adeac9a4a7a67d89"
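Each "Gathering logs for ..." step above shells out to crictl with a fixed 400-line tail, one container ID at a time. A rough manual equivalent, assuming SSH access to the minikube node and container IDs taken from the same `crictl ps -a` listing the "container status" step uses, would be:

	# list every container, running or exited, then tail one container's log
	sudo crictl ps -a
	sudo /usr/bin/crictl logs --tail 400 <container-id>   # <container-id> is a placeholder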
	I0307 22:39:31.284024  208512 system_pods.go:59] 9 kube-system pods found
	I0307 22:39:31.284056  208512 system_pods.go:61] "coredns-76f75df574-w5rmw" [0251e6d2-6b13-4f2a-be14-0f5f5c9f2d06] Running
	I0307 22:39:31.284062  208512 system_pods.go:61] "etcd-no-preload-767597" [0c594181-bb51-41e5-81fa-757eb754f8c4] Running
	I0307 22:39:31.284067  208512 system_pods.go:61] "kindnet-pgvkb" [af4ef8bb-b524-4d21-9b05-fae30994c410] Running
	I0307 22:39:31.284071  208512 system_pods.go:61] "kube-apiserver-no-preload-767597" [49192b5e-9e73-458b-9662-7fdc410ccc89] Running
	I0307 22:39:31.284075  208512 system_pods.go:61] "kube-controller-manager-no-preload-767597" [7f49d455-b0be-43fc-aed2-ea65addd5aaa] Running
	I0307 22:39:31.284079  208512 system_pods.go:61] "kube-proxy-d69xl" [2b00fe4a-4082-49b2-afac-955aa21f2d85] Running
	I0307 22:39:31.284083  208512 system_pods.go:61] "kube-scheduler-no-preload-767597" [e5152d4a-e849-4097-b447-6de2df64196a] Running
	I0307 22:39:31.284090  208512 system_pods.go:61] "metrics-server-57f55c9bc5-wgs49" [87cee0d4-cd14-4683-8b66-946fbd076723] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0307 22:39:31.284097  208512 system_pods.go:61] "storage-provisioner" [dd205484-4bb1-4fe8-b527-1e8597023472] Running
	I0307 22:39:31.284104  208512 system_pods.go:74] duration metric: took 4.129385401s to wait for pod list to return data ...
	I0307 22:39:31.284120  208512 default_sa.go:34] waiting for default service account to be created ...
	I0307 22:39:31.287403  208512 default_sa.go:45] found service account: "default"
	I0307 22:39:31.287431  208512 default_sa.go:55] duration metric: took 3.30235ms for default service account to be created ...
	I0307 22:39:31.287442  208512 system_pods.go:116] waiting for k8s-apps to be running ...
	I0307 22:39:31.293986  208512 system_pods.go:86] 9 kube-system pods found
	I0307 22:39:31.294017  208512 system_pods.go:89] "coredns-76f75df574-w5rmw" [0251e6d2-6b13-4f2a-be14-0f5f5c9f2d06] Running
	I0307 22:39:31.294024  208512 system_pods.go:89] "etcd-no-preload-767597" [0c594181-bb51-41e5-81fa-757eb754f8c4] Running
	I0307 22:39:31.294028  208512 system_pods.go:89] "kindnet-pgvkb" [af4ef8bb-b524-4d21-9b05-fae30994c410] Running
	I0307 22:39:31.294032  208512 system_pods.go:89] "kube-apiserver-no-preload-767597" [49192b5e-9e73-458b-9662-7fdc410ccc89] Running
	I0307 22:39:31.294037  208512 system_pods.go:89] "kube-controller-manager-no-preload-767597" [7f49d455-b0be-43fc-aed2-ea65addd5aaa] Running
	I0307 22:39:31.294042  208512 system_pods.go:89] "kube-proxy-d69xl" [2b00fe4a-4082-49b2-afac-955aa21f2d85] Running
	I0307 22:39:31.294047  208512 system_pods.go:89] "kube-scheduler-no-preload-767597" [e5152d4a-e849-4097-b447-6de2df64196a] Running
	I0307 22:39:31.294054  208512 system_pods.go:89] "metrics-server-57f55c9bc5-wgs49" [87cee0d4-cd14-4683-8b66-946fbd076723] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0307 22:39:31.294062  208512 system_pods.go:89] "storage-provisioner" [dd205484-4bb1-4fe8-b527-1e8597023472] Running
	I0307 22:39:31.294071  208512 system_pods.go:126] duration metric: took 6.624376ms to wait for k8s-apps to be running ...
	I0307 22:39:31.294087  208512 system_svc.go:44] waiting for kubelet service to be running ...
	I0307 22:39:31.294145  208512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0307 22:39:31.306384  208512 system_svc.go:56] duration metric: took 12.287108ms WaitForService to wait for kubelet
	I0307 22:39:31.306412  208512 kubeadm.go:576] duration metric: took 4m18.022773321s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0307 22:39:31.306434  208512 node_conditions.go:102] verifying NodePressure condition ...
	I0307 22:39:31.310148  208512 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0307 22:39:31.310179  208512 node_conditions.go:123] node cpu capacity is 2
	I0307 22:39:31.310191  208512 node_conditions.go:105] duration metric: took 3.751234ms to run NodePressure ...
	I0307 22:39:31.310203  208512 start.go:240] waiting for startup goroutines ...
	I0307 22:39:31.310210  208512 start.go:245] waiting for cluster config update ...
	I0307 22:39:31.310221  208512 start.go:254] writing updated cluster config ...
	I0307 22:39:31.310521  208512 ssh_runner.go:195] Run: rm -f paused
	I0307 22:39:31.378749  208512 start.go:600] kubectl: 1.29.2, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0307 22:39:31.382202  208512 out.go:177] * Done! kubectl is now configured to use "no-preload-767597" cluster and "default" namespace by default
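The "(minor skew: 0)" note above is the client/cluster version comparison minikube runs at the end of start: kubectl 1.29.2 against cluster 1.29.0-rc.2 share the same minor version, so no compatibility warning is printed. A hand-run version of that comparison, assuming the kubeconfig already points at the "no-preload-767597" context as the line above states, is:

	kubectl version --output=json   # compare .clientVersion.minor against .serverVersion.minor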
	I0307 22:39:36.979366  203173 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 22:39:36.991646  203173 api_server.go:72] duration metric: took 5m52.144665067s to wait for apiserver process to appear ...
	I0307 22:39:36.991674  203173 api_server.go:88] waiting for apiserver healthz status ...
	I0307 22:39:36.991726  203173 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0307 22:39:36.991797  203173 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0307 22:39:37.043778  203173 cri.go:89] found id: "8b2ed0287cc5bbf594cda29d926ae828c7d6d12ad490285c6e855c8824388d29"
	I0307 22:39:37.043817  203173 cri.go:89] found id: "cc2db588ff857bfb95417bfc8b125ca64e6caba952ba4acd81ec27522fde3127"
	I0307 22:39:37.043823  203173 cri.go:89] found id: ""
	I0307 22:39:37.043831  203173 logs.go:276] 2 containers: [8b2ed0287cc5bbf594cda29d926ae828c7d6d12ad490285c6e855c8824388d29 cc2db588ff857bfb95417bfc8b125ca64e6caba952ba4acd81ec27522fde3127]
	I0307 22:39:37.043902  203173 ssh_runner.go:195] Run: which crictl
	I0307 22:39:37.049541  203173 ssh_runner.go:195] Run: which crictl
	I0307 22:39:37.054684  203173 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0307 22:39:37.054809  203173 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0307 22:39:37.100838  203173 cri.go:89] found id: "14afb08fcf0333a1d360d000f1fa3b158ae712b1b2a3e6c52b164c05245f920d"
	I0307 22:39:37.100860  203173 cri.go:89] found id: "dabbb0b4fd2574e92c3a53182c494ec08df151f99bd21d27189df149b7f8961c"
	I0307 22:39:37.100865  203173 cri.go:89] found id: ""
	I0307 22:39:37.100873  203173 logs.go:276] 2 containers: [14afb08fcf0333a1d360d000f1fa3b158ae712b1b2a3e6c52b164c05245f920d dabbb0b4fd2574e92c3a53182c494ec08df151f99bd21d27189df149b7f8961c]
	I0307 22:39:37.100932  203173 ssh_runner.go:195] Run: which crictl
	I0307 22:39:37.104646  203173 ssh_runner.go:195] Run: which crictl
	I0307 22:39:37.108148  203173 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0307 22:39:37.108223  203173 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0307 22:39:37.151349  203173 cri.go:89] found id: "413452dbe13bfb277303d92484791aa53ae9a198a089364135d2347bf8475623"
	I0307 22:39:37.151380  203173 cri.go:89] found id: "a105c02644c9c901090c4c9792a32b0396cee09395513751eeb1be7fbf4b8477"
	I0307 22:39:37.151386  203173 cri.go:89] found id: ""
	I0307 22:39:37.151393  203173 logs.go:276] 2 containers: [413452dbe13bfb277303d92484791aa53ae9a198a089364135d2347bf8475623 a105c02644c9c901090c4c9792a32b0396cee09395513751eeb1be7fbf4b8477]
	I0307 22:39:37.151449  203173 ssh_runner.go:195] Run: which crictl
	I0307 22:39:37.155102  203173 ssh_runner.go:195] Run: which crictl
	I0307 22:39:37.158773  203173 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0307 22:39:37.158875  203173 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0307 22:39:37.197623  203173 cri.go:89] found id: "157ece61e2d99f79b42530111727fc9798a3b91433c5d3a5cd83cefec75a6a67"
	I0307 22:39:37.197647  203173 cri.go:89] found id: "2ca89421ca012456f503b4b92ac92a69bf6b0042461af4e2e3140e7adb8b27dc"
	I0307 22:39:37.197652  203173 cri.go:89] found id: ""
	I0307 22:39:37.197659  203173 logs.go:276] 2 containers: [157ece61e2d99f79b42530111727fc9798a3b91433c5d3a5cd83cefec75a6a67 2ca89421ca012456f503b4b92ac92a69bf6b0042461af4e2e3140e7adb8b27dc]
	I0307 22:39:37.197711  203173 ssh_runner.go:195] Run: which crictl
	I0307 22:39:37.201264  203173 ssh_runner.go:195] Run: which crictl
	I0307 22:39:37.204706  203173 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0307 22:39:37.204782  203173 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0307 22:39:37.244484  203173 cri.go:89] found id: "7998095a37dd2896f96215e8ab5d29ae495d3dc1a00b119506f8ace55156b07a"
	I0307 22:39:37.244508  203173 cri.go:89] found id: "f1df99d1a705a4556e4be13c16d400b99149438c96bfe6da863b8b041983b036"
	I0307 22:39:37.244513  203173 cri.go:89] found id: ""
	I0307 22:39:37.244520  203173 logs.go:276] 2 containers: [7998095a37dd2896f96215e8ab5d29ae495d3dc1a00b119506f8ace55156b07a f1df99d1a705a4556e4be13c16d400b99149438c96bfe6da863b8b041983b036]
	I0307 22:39:37.244580  203173 ssh_runner.go:195] Run: which crictl
	I0307 22:39:37.248425  203173 ssh_runner.go:195] Run: which crictl
	I0307 22:39:37.252613  203173 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0307 22:39:37.252752  203173 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0307 22:39:37.296151  203173 cri.go:89] found id: "9ba0a9f80bc7b1866c7509e8df81dcc0d3e74b8702a84d7676660a9395cf59fc"
	I0307 22:39:37.296177  203173 cri.go:89] found id: "bd7a60d8d4f26067debaaf5b2c23fe94dcfed76118b17104051bc8c755b35ce1"
	I0307 22:39:37.296182  203173 cri.go:89] found id: ""
	I0307 22:39:37.296189  203173 logs.go:276] 2 containers: [9ba0a9f80bc7b1866c7509e8df81dcc0d3e74b8702a84d7676660a9395cf59fc bd7a60d8d4f26067debaaf5b2c23fe94dcfed76118b17104051bc8c755b35ce1]
	I0307 22:39:37.296344  203173 ssh_runner.go:195] Run: which crictl
	I0307 22:39:37.300120  203173 ssh_runner.go:195] Run: which crictl
	I0307 22:39:37.303560  203173 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0307 22:39:37.303657  203173 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0307 22:39:37.341312  203173 cri.go:89] found id: "2543aa094e87373b5f846e8bdc2bbadbdda2c55f6d326c6db6b9216375dde5e6"
	I0307 22:39:37.341335  203173 cri.go:89] found id: "e9768721745ab392d0ad14d4055c5cf01cc780491a01953b4b45bf77d2d3b062"
	I0307 22:39:37.341341  203173 cri.go:89] found id: ""
	I0307 22:39:37.341348  203173 logs.go:276] 2 containers: [2543aa094e87373b5f846e8bdc2bbadbdda2c55f6d326c6db6b9216375dde5e6 e9768721745ab392d0ad14d4055c5cf01cc780491a01953b4b45bf77d2d3b062]
	I0307 22:39:37.341404  203173 ssh_runner.go:195] Run: which crictl
	I0307 22:39:37.344938  203173 ssh_runner.go:195] Run: which crictl
	I0307 22:39:37.348625  203173 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0307 22:39:37.348737  203173 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0307 22:39:37.386014  203173 cri.go:89] found id: "426c41631cb95479682cbd7b19483427f710b2f7932b9e6d2bc0367710c2ca1d"
	I0307 22:39:37.386037  203173 cri.go:89] found id: "f98ff695720f5ebea06fa188196a8a6393c29dfd0211d4a0ec784c54cbb9f906"
	I0307 22:39:37.386042  203173 cri.go:89] found id: ""
	I0307 22:39:37.386049  203173 logs.go:276] 2 containers: [426c41631cb95479682cbd7b19483427f710b2f7932b9e6d2bc0367710c2ca1d f98ff695720f5ebea06fa188196a8a6393c29dfd0211d4a0ec784c54cbb9f906]
	I0307 22:39:37.386141  203173 ssh_runner.go:195] Run: which crictl
	I0307 22:39:37.390014  203173 ssh_runner.go:195] Run: which crictl
	I0307 22:39:37.393469  203173 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0307 22:39:37.393596  203173 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0307 22:39:37.459166  203173 cri.go:89] found id: "778e16c451c2e6a7fecd1d1b5d37778e92f50fd757af7d8812222cfa025fdc86"
	I0307 22:39:37.459191  203173 cri.go:89] found id: ""
	I0307 22:39:37.459199  203173 logs.go:276] 1 containers: [778e16c451c2e6a7fecd1d1b5d37778e92f50fd757af7d8812222cfa025fdc86]
	I0307 22:39:37.459285  203173 ssh_runner.go:195] Run: which crictl
	I0307 22:39:37.463408  203173 logs.go:123] Gathering logs for describe nodes ...
	I0307 22:39:37.463433  203173 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0307 22:39:37.633708  203173 logs.go:123] Gathering logs for kube-apiserver [cc2db588ff857bfb95417bfc8b125ca64e6caba952ba4acd81ec27522fde3127] ...
	I0307 22:39:37.633777  203173 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cc2db588ff857bfb95417bfc8b125ca64e6caba952ba4acd81ec27522fde3127"
	I0307 22:39:37.699015  203173 logs.go:123] Gathering logs for etcd [dabbb0b4fd2574e92c3a53182c494ec08df151f99bd21d27189df149b7f8961c] ...
	I0307 22:39:37.699048  203173 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dabbb0b4fd2574e92c3a53182c494ec08df151f99bd21d27189df149b7f8961c"
	I0307 22:39:37.740792  203173 logs.go:123] Gathering logs for kube-scheduler [157ece61e2d99f79b42530111727fc9798a3b91433c5d3a5cd83cefec75a6a67] ...
	I0307 22:39:37.740824  203173 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 157ece61e2d99f79b42530111727fc9798a3b91433c5d3a5cd83cefec75a6a67"
	I0307 22:39:37.793158  203173 logs.go:123] Gathering logs for kindnet [2543aa094e87373b5f846e8bdc2bbadbdda2c55f6d326c6db6b9216375dde5e6] ...
	I0307 22:39:37.793186  203173 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2543aa094e87373b5f846e8bdc2bbadbdda2c55f6d326c6db6b9216375dde5e6"
	I0307 22:39:37.845822  203173 logs.go:123] Gathering logs for kubelet ...
	I0307 22:39:37.845849  203173 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0307 22:39:37.901312  203173 logs.go:138] Found kubelet problem: Mar 07 22:34:05 old-k8s-version-497253 kubelet[660]: E0307 22:34:05.261951     660 reflector.go:138] object-"kube-system"/"kindnet-token-ghqhd": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-ghqhd" is forbidden: User "system:node:old-k8s-version-497253" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-497253' and this object
	W0307 22:39:37.908351  203173 logs.go:138] Found kubelet problem: Mar 07 22:34:07 old-k8s-version-497253 kubelet[660]: E0307 22:34:07.162816     660 pod_workers.go:191] Error syncing pod 1cd023e3-2966-4092-bcbd-4c5b6ca28aa3 ("metrics-server-9975d5f86-qmg9k_kube-system(1cd023e3-2966-4092-bcbd-4c5b6ca28aa3)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0307 22:39:37.908549  203173 logs.go:138] Found kubelet problem: Mar 07 22:34:07 old-k8s-version-497253 kubelet[660]: E0307 22:34:07.277655     660 pod_workers.go:191] Error syncing pod 1cd023e3-2966-4092-bcbd-4c5b6ca28aa3 ("metrics-server-9975d5f86-qmg9k_kube-system(1cd023e3-2966-4092-bcbd-4c5b6ca28aa3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 22:39:37.911329  203173 logs.go:138] Found kubelet problem: Mar 07 22:34:22 old-k8s-version-497253 kubelet[660]: E0307 22:34:22.936411     660 pod_workers.go:191] Error syncing pod 1cd023e3-2966-4092-bcbd-4c5b6ca28aa3 ("metrics-server-9975d5f86-qmg9k_kube-system(1cd023e3-2966-4092-bcbd-4c5b6ca28aa3)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0307 22:39:37.913515  203173 logs.go:138] Found kubelet problem: Mar 07 22:34:36 old-k8s-version-497253 kubelet[660]: E0307 22:34:36.920792     660 pod_workers.go:191] Error syncing pod 1cd023e3-2966-4092-bcbd-4c5b6ca28aa3 ("metrics-server-9975d5f86-qmg9k_kube-system(1cd023e3-2966-4092-bcbd-4c5b6ca28aa3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 22:39:37.913980  203173 logs.go:138] Found kubelet problem: Mar 07 22:34:37 old-k8s-version-497253 kubelet[660]: E0307 22:34:37.462923     660 pod_workers.go:191] Error syncing pod 14312bc8-5525-434e-b6f7-56306d708544 ("dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"
	W0307 22:39:37.914311  203173 logs.go:138] Found kubelet problem: Mar 07 22:34:38 old-k8s-version-497253 kubelet[660]: E0307 22:34:38.465991     660 pod_workers.go:191] Error syncing pod 14312bc8-5525-434e-b6f7-56306d708544 ("dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"
	W0307 22:39:37.914752  203173 logs.go:138] Found kubelet problem: Mar 07 22:34:39 old-k8s-version-497253 kubelet[660]: E0307 22:34:39.469998     660 pod_workers.go:191] Error syncing pod f477a573-d9fd-4235-84a6-32e52bea48e1 ("storage-provisioner_kube-system(f477a573-d9fd-4235-84a6-32e52bea48e1)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(f477a573-d9fd-4235-84a6-32e52bea48e1)"
	W0307 22:39:37.915080  203173 logs.go:138] Found kubelet problem: Mar 07 22:34:41 old-k8s-version-497253 kubelet[660]: E0307 22:34:41.449013     660 pod_workers.go:191] Error syncing pod 14312bc8-5525-434e-b6f7-56306d708544 ("dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"
	W0307 22:39:37.918016  203173 logs.go:138] Found kubelet problem: Mar 07 22:34:51 old-k8s-version-497253 kubelet[660]: E0307 22:34:51.926704     660 pod_workers.go:191] Error syncing pod 1cd023e3-2966-4092-bcbd-4c5b6ca28aa3 ("metrics-server-9975d5f86-qmg9k_kube-system(1cd023e3-2966-4092-bcbd-4c5b6ca28aa3)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0307 22:39:37.918607  203173 logs.go:138] Found kubelet problem: Mar 07 22:34:54 old-k8s-version-497253 kubelet[660]: E0307 22:34:54.505288     660 pod_workers.go:191] Error syncing pod 14312bc8-5525-434e-b6f7-56306d708544 ("dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"
	W0307 22:39:37.918933  203173 logs.go:138] Found kubelet problem: Mar 07 22:35:01 old-k8s-version-497253 kubelet[660]: E0307 22:35:01.448994     660 pod_workers.go:191] Error syncing pod 14312bc8-5525-434e-b6f7-56306d708544 ("dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"
	W0307 22:39:37.919119  203173 logs.go:138] Found kubelet problem: Mar 07 22:35:02 old-k8s-version-497253 kubelet[660]: E0307 22:35:02.915003     660 pod_workers.go:191] Error syncing pod 1cd023e3-2966-4092-bcbd-4c5b6ca28aa3 ("metrics-server-9975d5f86-qmg9k_kube-system(1cd023e3-2966-4092-bcbd-4c5b6ca28aa3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 22:39:37.919448  203173 logs.go:138] Found kubelet problem: Mar 07 22:35:12 old-k8s-version-497253 kubelet[660]: E0307 22:35:12.922852     660 pod_workers.go:191] Error syncing pod 14312bc8-5525-434e-b6f7-56306d708544 ("dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"
	W0307 22:39:37.919635  203173 logs.go:138] Found kubelet problem: Mar 07 22:35:14 old-k8s-version-497253 kubelet[660]: E0307 22:35:14.914572     660 pod_workers.go:191] Error syncing pod 1cd023e3-2966-4092-bcbd-4c5b6ca28aa3 ("metrics-server-9975d5f86-qmg9k_kube-system(1cd023e3-2966-4092-bcbd-4c5b6ca28aa3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 22:39:37.920224  203173 logs.go:138] Found kubelet problem: Mar 07 22:35:25 old-k8s-version-497253 kubelet[660]: E0307 22:35:25.578672     660 pod_workers.go:191] Error syncing pod 14312bc8-5525-434e-b6f7-56306d708544 ("dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"
	W0307 22:39:37.920418  203173 logs.go:138] Found kubelet problem: Mar 07 22:35:26 old-k8s-version-497253 kubelet[660]: E0307 22:35:26.918579     660 pod_workers.go:191] Error syncing pod 1cd023e3-2966-4092-bcbd-4c5b6ca28aa3 ("metrics-server-9975d5f86-qmg9k_kube-system(1cd023e3-2966-4092-bcbd-4c5b6ca28aa3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 22:39:37.920747  203173 logs.go:138] Found kubelet problem: Mar 07 22:35:31 old-k8s-version-497253 kubelet[660]: E0307 22:35:31.448901     660 pod_workers.go:191] Error syncing pod 14312bc8-5525-434e-b6f7-56306d708544 ("dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"
	W0307 22:39:37.923205  203173 logs.go:138] Found kubelet problem: Mar 07 22:35:39 old-k8s-version-497253 kubelet[660]: E0307 22:35:39.938727     660 pod_workers.go:191] Error syncing pod 1cd023e3-2966-4092-bcbd-4c5b6ca28aa3 ("metrics-server-9975d5f86-qmg9k_kube-system(1cd023e3-2966-4092-bcbd-4c5b6ca28aa3)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0307 22:39:37.923532  203173 logs.go:138] Found kubelet problem: Mar 07 22:35:46 old-k8s-version-497253 kubelet[660]: E0307 22:35:46.914538     660 pod_workers.go:191] Error syncing pod 14312bc8-5525-434e-b6f7-56306d708544 ("dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"
	W0307 22:39:37.923717  203173 logs.go:138] Found kubelet problem: Mar 07 22:35:53 old-k8s-version-497253 kubelet[660]: E0307 22:35:53.914358     660 pod_workers.go:191] Error syncing pod 1cd023e3-2966-4092-bcbd-4c5b6ca28aa3 ("metrics-server-9975d5f86-qmg9k_kube-system(1cd023e3-2966-4092-bcbd-4c5b6ca28aa3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 22:39:37.924047  203173 logs.go:138] Found kubelet problem: Mar 07 22:36:00 old-k8s-version-497253 kubelet[660]: E0307 22:36:00.917774     660 pod_workers.go:191] Error syncing pod 14312bc8-5525-434e-b6f7-56306d708544 ("dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"
	W0307 22:39:37.924231  203173 logs.go:138] Found kubelet problem: Mar 07 22:36:05 old-k8s-version-497253 kubelet[660]: E0307 22:36:05.914412     660 pod_workers.go:191] Error syncing pod 1cd023e3-2966-4092-bcbd-4c5b6ca28aa3 ("metrics-server-9975d5f86-qmg9k_kube-system(1cd023e3-2966-4092-bcbd-4c5b6ca28aa3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 22:39:37.924825  203173 logs.go:138] Found kubelet problem: Mar 07 22:36:15 old-k8s-version-497253 kubelet[660]: E0307 22:36:15.682987     660 pod_workers.go:191] Error syncing pod 14312bc8-5525-434e-b6f7-56306d708544 ("dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"
	W0307 22:39:37.925010  203173 logs.go:138] Found kubelet problem: Mar 07 22:36:18 old-k8s-version-497253 kubelet[660]: E0307 22:36:18.914367     660 pod_workers.go:191] Error syncing pod 1cd023e3-2966-4092-bcbd-4c5b6ca28aa3 ("metrics-server-9975d5f86-qmg9k_kube-system(1cd023e3-2966-4092-bcbd-4c5b6ca28aa3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 22:39:37.925336  203173 logs.go:138] Found kubelet problem: Mar 07 22:36:21 old-k8s-version-497253 kubelet[660]: E0307 22:36:21.449525     660 pod_workers.go:191] Error syncing pod 14312bc8-5525-434e-b6f7-56306d708544 ("dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"
	W0307 22:39:37.925522  203173 logs.go:138] Found kubelet problem: Mar 07 22:36:31 old-k8s-version-497253 kubelet[660]: E0307 22:36:31.914470     660 pod_workers.go:191] Error syncing pod 1cd023e3-2966-4092-bcbd-4c5b6ca28aa3 ("metrics-server-9975d5f86-qmg9k_kube-system(1cd023e3-2966-4092-bcbd-4c5b6ca28aa3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 22:39:37.925856  203173 logs.go:138] Found kubelet problem: Mar 07 22:36:33 old-k8s-version-497253 kubelet[660]: E0307 22:36:33.914221     660 pod_workers.go:191] Error syncing pod 14312bc8-5525-434e-b6f7-56306d708544 ("dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"
	W0307 22:39:37.926040  203173 logs.go:138] Found kubelet problem: Mar 07 22:36:42 old-k8s-version-497253 kubelet[660]: E0307 22:36:42.914696     660 pod_workers.go:191] Error syncing pod 1cd023e3-2966-4092-bcbd-4c5b6ca28aa3 ("metrics-server-9975d5f86-qmg9k_kube-system(1cd023e3-2966-4092-bcbd-4c5b6ca28aa3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 22:39:37.926366  203173 logs.go:138] Found kubelet problem: Mar 07 22:36:45 old-k8s-version-497253 kubelet[660]: E0307 22:36:45.914100     660 pod_workers.go:191] Error syncing pod 14312bc8-5525-434e-b6f7-56306d708544 ("dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"
	W0307 22:39:37.926550  203173 logs.go:138] Found kubelet problem: Mar 07 22:36:57 old-k8s-version-497253 kubelet[660]: E0307 22:36:57.914372     660 pod_workers.go:191] Error syncing pod 1cd023e3-2966-4092-bcbd-4c5b6ca28aa3 ("metrics-server-9975d5f86-qmg9k_kube-system(1cd023e3-2966-4092-bcbd-4c5b6ca28aa3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 22:39:37.926879  203173 logs.go:138] Found kubelet problem: Mar 07 22:36:58 old-k8s-version-497253 kubelet[660]: E0307 22:36:58.914093     660 pod_workers.go:191] Error syncing pod 14312bc8-5525-434e-b6f7-56306d708544 ("dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"
	W0307 22:39:37.927206  203173 logs.go:138] Found kubelet problem: Mar 07 22:37:10 old-k8s-version-497253 kubelet[660]: E0307 22:37:10.914874     660 pod_workers.go:191] Error syncing pod 14312bc8-5525-434e-b6f7-56306d708544 ("dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"
	W0307 22:39:37.929692  203173 logs.go:138] Found kubelet problem: Mar 07 22:37:10 old-k8s-version-497253 kubelet[660]: E0307 22:37:10.928532     660 pod_workers.go:191] Error syncing pod 1cd023e3-2966-4092-bcbd-4c5b6ca28aa3 ("metrics-server-9975d5f86-qmg9k_kube-system(1cd023e3-2966-4092-bcbd-4c5b6ca28aa3)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0307 22:39:37.929880  203173 logs.go:138] Found kubelet problem: Mar 07 22:37:21 old-k8s-version-497253 kubelet[660]: E0307 22:37:21.914234     660 pod_workers.go:191] Error syncing pod 1cd023e3-2966-4092-bcbd-4c5b6ca28aa3 ("metrics-server-9975d5f86-qmg9k_kube-system(1cd023e3-2966-4092-bcbd-4c5b6ca28aa3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 22:39:37.930207  203173 logs.go:138] Found kubelet problem: Mar 07 22:37:23 old-k8s-version-497253 kubelet[660]: E0307 22:37:23.914014     660 pod_workers.go:191] Error syncing pod 14312bc8-5525-434e-b6f7-56306d708544 ("dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"
	W0307 22:39:37.930390  203173 logs.go:138] Found kubelet problem: Mar 07 22:37:33 old-k8s-version-497253 kubelet[660]: E0307 22:37:33.914590     660 pod_workers.go:191] Error syncing pod 1cd023e3-2966-4092-bcbd-4c5b6ca28aa3 ("metrics-server-9975d5f86-qmg9k_kube-system(1cd023e3-2966-4092-bcbd-4c5b6ca28aa3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 22:39:37.930976  203173 logs.go:138] Found kubelet problem: Mar 07 22:37:37 old-k8s-version-497253 kubelet[660]: E0307 22:37:37.858788     660 pod_workers.go:191] Error syncing pod 14312bc8-5525-434e-b6f7-56306d708544 ("dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"
	W0307 22:39:37.931303  203173 logs.go:138] Found kubelet problem: Mar 07 22:37:41 old-k8s-version-497253 kubelet[660]: E0307 22:37:41.449456     660 pod_workers.go:191] Error syncing pod 14312bc8-5525-434e-b6f7-56306d708544 ("dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"
	W0307 22:39:37.931488  203173 logs.go:138] Found kubelet problem: Mar 07 22:37:46 old-k8s-version-497253 kubelet[660]: E0307 22:37:46.917492     660 pod_workers.go:191] Error syncing pod 1cd023e3-2966-4092-bcbd-4c5b6ca28aa3 ("metrics-server-9975d5f86-qmg9k_kube-system(1cd023e3-2966-4092-bcbd-4c5b6ca28aa3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 22:39:37.931817  203173 logs.go:138] Found kubelet problem: Mar 07 22:37:56 old-k8s-version-497253 kubelet[660]: E0307 22:37:56.914065     660 pod_workers.go:191] Error syncing pod 14312bc8-5525-434e-b6f7-56306d708544 ("dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"
	W0307 22:39:37.932001  203173 logs.go:138] Found kubelet problem: Mar 07 22:37:57 old-k8s-version-497253 kubelet[660]: E0307 22:37:57.915061     660 pod_workers.go:191] Error syncing pod 1cd023e3-2966-4092-bcbd-4c5b6ca28aa3 ("metrics-server-9975d5f86-qmg9k_kube-system(1cd023e3-2966-4092-bcbd-4c5b6ca28aa3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 22:39:37.932185  203173 logs.go:138] Found kubelet problem: Mar 07 22:38:09 old-k8s-version-497253 kubelet[660]: E0307 22:38:09.914552     660 pod_workers.go:191] Error syncing pod 1cd023e3-2966-4092-bcbd-4c5b6ca28aa3 ("metrics-server-9975d5f86-qmg9k_kube-system(1cd023e3-2966-4092-bcbd-4c5b6ca28aa3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 22:39:37.932518  203173 logs.go:138] Found kubelet problem: Mar 07 22:38:11 old-k8s-version-497253 kubelet[660]: E0307 22:38:11.913955     660 pod_workers.go:191] Error syncing pod 14312bc8-5525-434e-b6f7-56306d708544 ("dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"
	W0307 22:39:37.932704  203173 logs.go:138] Found kubelet problem: Mar 07 22:38:23 old-k8s-version-497253 kubelet[660]: E0307 22:38:23.914295     660 pod_workers.go:191] Error syncing pod 1cd023e3-2966-4092-bcbd-4c5b6ca28aa3 ("metrics-server-9975d5f86-qmg9k_kube-system(1cd023e3-2966-4092-bcbd-4c5b6ca28aa3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 22:39:37.933032  203173 logs.go:138] Found kubelet problem: Mar 07 22:38:26 old-k8s-version-497253 kubelet[660]: E0307 22:38:26.914365     660 pod_workers.go:191] Error syncing pod 14312bc8-5525-434e-b6f7-56306d708544 ("dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"
	W0307 22:39:37.933216  203173 logs.go:138] Found kubelet problem: Mar 07 22:38:35 old-k8s-version-497253 kubelet[660]: E0307 22:38:35.914352     660 pod_workers.go:191] Error syncing pod 1cd023e3-2966-4092-bcbd-4c5b6ca28aa3 ("metrics-server-9975d5f86-qmg9k_kube-system(1cd023e3-2966-4092-bcbd-4c5b6ca28aa3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 22:39:37.933542  203173 logs.go:138] Found kubelet problem: Mar 07 22:38:38 old-k8s-version-497253 kubelet[660]: E0307 22:38:38.914978     660 pod_workers.go:191] Error syncing pod 14312bc8-5525-434e-b6f7-56306d708544 ("dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"
	W0307 22:39:37.933726  203173 logs.go:138] Found kubelet problem: Mar 07 22:38:48 old-k8s-version-497253 kubelet[660]: E0307 22:38:48.914308     660 pod_workers.go:191] Error syncing pod 1cd023e3-2966-4092-bcbd-4c5b6ca28aa3 ("metrics-server-9975d5f86-qmg9k_kube-system(1cd023e3-2966-4092-bcbd-4c5b6ca28aa3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 22:39:37.934052  203173 logs.go:138] Found kubelet problem: Mar 07 22:38:50 old-k8s-version-497253 kubelet[660]: E0307 22:38:50.914584     660 pod_workers.go:191] Error syncing pod 14312bc8-5525-434e-b6f7-56306d708544 ("dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"
	W0307 22:39:37.934378  203173 logs.go:138] Found kubelet problem: Mar 07 22:39:01 old-k8s-version-497253 kubelet[660]: E0307 22:39:01.914578     660 pod_workers.go:191] Error syncing pod 14312bc8-5525-434e-b6f7-56306d708544 ("dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"
	W0307 22:39:37.934564  203173 logs.go:138] Found kubelet problem: Mar 07 22:39:03 old-k8s-version-497253 kubelet[660]: E0307 22:39:03.914427     660 pod_workers.go:191] Error syncing pod 1cd023e3-2966-4092-bcbd-4c5b6ca28aa3 ("metrics-server-9975d5f86-qmg9k_kube-system(1cd023e3-2966-4092-bcbd-4c5b6ca28aa3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 22:39:37.934899  203173 logs.go:138] Found kubelet problem: Mar 07 22:39:15 old-k8s-version-497253 kubelet[660]: E0307 22:39:15.913973     660 pod_workers.go:191] Error syncing pod 14312bc8-5525-434e-b6f7-56306d708544 ("dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"
	W0307 22:39:37.935085  203173 logs.go:138] Found kubelet problem: Mar 07 22:39:16 old-k8s-version-497253 kubelet[660]: E0307 22:39:16.918241     660 pod_workers.go:191] Error syncing pod 1cd023e3-2966-4092-bcbd-4c5b6ca28aa3 ("metrics-server-9975d5f86-qmg9k_kube-system(1cd023e3-2966-4092-bcbd-4c5b6ca28aa3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 22:39:37.935411  203173 logs.go:138] Found kubelet problem: Mar 07 22:39:26 old-k8s-version-497253 kubelet[660]: E0307 22:39:26.914096     660 pod_workers.go:191] Error syncing pod 14312bc8-5525-434e-b6f7-56306d708544 ("dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"
	W0307 22:39:37.935596  203173 logs.go:138] Found kubelet problem: Mar 07 22:39:29 old-k8s-version-497253 kubelet[660]: E0307 22:39:29.914280     660 pod_workers.go:191] Error syncing pod 1cd023e3-2966-4092-bcbd-4c5b6ca28aa3 ("metrics-server-9975d5f86-qmg9k_kube-system(1cd023e3-2966-4092-bcbd-4c5b6ca28aa3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0307 22:39:37.935605  203173 logs.go:123] Gathering logs for etcd [14afb08fcf0333a1d360d000f1fa3b158ae712b1b2a3e6c52b164c05245f920d] ...
	I0307 22:39:37.935619  203173 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 14afb08fcf0333a1d360d000f1fa3b158ae712b1b2a3e6c52b164c05245f920d"
	I0307 22:39:37.997943  203173 logs.go:123] Gathering logs for kube-controller-manager [9ba0a9f80bc7b1866c7509e8df81dcc0d3e74b8702a84d7676660a9395cf59fc] ...
	I0307 22:39:37.997972  203173 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ba0a9f80bc7b1866c7509e8df81dcc0d3e74b8702a84d7676660a9395cf59fc"
	I0307 22:39:38.115147  203173 logs.go:123] Gathering logs for dmesg ...
	I0307 22:39:38.115183  203173 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0307 22:39:38.133346  203173 logs.go:123] Gathering logs for coredns [a105c02644c9c901090c4c9792a32b0396cee09395513751eeb1be7fbf4b8477] ...
	I0307 22:39:38.133379  203173 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a105c02644c9c901090c4c9792a32b0396cee09395513751eeb1be7fbf4b8477"
	I0307 22:39:38.172233  203173 logs.go:123] Gathering logs for kube-scheduler [2ca89421ca012456f503b4b92ac92a69bf6b0042461af4e2e3140e7adb8b27dc] ...
	I0307 22:39:38.172264  203173 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ca89421ca012456f503b4b92ac92a69bf6b0042461af4e2e3140e7adb8b27dc"
	I0307 22:39:38.215015  203173 logs.go:123] Gathering logs for kube-proxy [f1df99d1a705a4556e4be13c16d400b99149438c96bfe6da863b8b041983b036] ...
	I0307 22:39:38.215047  203173 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f1df99d1a705a4556e4be13c16d400b99149438c96bfe6da863b8b041983b036"
	I0307 22:39:38.253945  203173 logs.go:123] Gathering logs for kindnet [e9768721745ab392d0ad14d4055c5cf01cc780491a01953b4b45bf77d2d3b062] ...
	I0307 22:39:38.253974  203173 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e9768721745ab392d0ad14d4055c5cf01cc780491a01953b4b45bf77d2d3b062"
	I0307 22:39:38.298086  203173 logs.go:123] Gathering logs for storage-provisioner [426c41631cb95479682cbd7b19483427f710b2f7932b9e6d2bc0367710c2ca1d] ...
	I0307 22:39:38.298117  203173 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 426c41631cb95479682cbd7b19483427f710b2f7932b9e6d2bc0367710c2ca1d"
	I0307 22:39:38.343935  203173 logs.go:123] Gathering logs for coredns [413452dbe13bfb277303d92484791aa53ae9a198a089364135d2347bf8475623] ...
	I0307 22:39:38.343969  203173 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 413452dbe13bfb277303d92484791aa53ae9a198a089364135d2347bf8475623"
	I0307 22:39:38.382349  203173 logs.go:123] Gathering logs for kube-proxy [7998095a37dd2896f96215e8ab5d29ae495d3dc1a00b119506f8ace55156b07a] ...
	I0307 22:39:38.382378  203173 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7998095a37dd2896f96215e8ab5d29ae495d3dc1a00b119506f8ace55156b07a"
	I0307 22:39:38.427494  203173 logs.go:123] Gathering logs for kube-controller-manager [bd7a60d8d4f26067debaaf5b2c23fe94dcfed76118b17104051bc8c755b35ce1] ...
	I0307 22:39:38.427570  203173 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd7a60d8d4f26067debaaf5b2c23fe94dcfed76118b17104051bc8c755b35ce1"
	I0307 22:39:38.491465  203173 logs.go:123] Gathering logs for storage-provisioner [f98ff695720f5ebea06fa188196a8a6393c29dfd0211d4a0ec784c54cbb9f906] ...
	I0307 22:39:38.491536  203173 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f98ff695720f5ebea06fa188196a8a6393c29dfd0211d4a0ec784c54cbb9f906"
	I0307 22:39:38.532531  203173 logs.go:123] Gathering logs for kubernetes-dashboard [778e16c451c2e6a7fecd1d1b5d37778e92f50fd757af7d8812222cfa025fdc86] ...
	I0307 22:39:38.532564  203173 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 778e16c451c2e6a7fecd1d1b5d37778e92f50fd757af7d8812222cfa025fdc86"
	I0307 22:39:38.577829  203173 logs.go:123] Gathering logs for containerd ...
	I0307 22:39:38.577862  203173 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0307 22:39:38.638480  203173 logs.go:123] Gathering logs for container status ...
	I0307 22:39:38.638517  203173 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0307 22:39:38.693638  203173 logs.go:123] Gathering logs for kube-apiserver [8b2ed0287cc5bbf594cda29d926ae828c7d6d12ad490285c6e855c8824388d29] ...
	I0307 22:39:38.693667  203173 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8b2ed0287cc5bbf594cda29d926ae828c7d6d12ad490285c6e855c8824388d29"
	I0307 22:39:38.764704  203173 out.go:304] Setting ErrFile to fd 2...
	I0307 22:39:38.764734  203173 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0307 22:39:38.764785  203173 out.go:239] X Problems detected in kubelet:
	W0307 22:39:38.764795  203173 out.go:239]   Mar 07 22:39:03 old-k8s-version-497253 kubelet[660]: E0307 22:39:03.914427     660 pod_workers.go:191] Error syncing pod 1cd023e3-2966-4092-bcbd-4c5b6ca28aa3 ("metrics-server-9975d5f86-qmg9k_kube-system(1cd023e3-2966-4092-bcbd-4c5b6ca28aa3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 22:39:38.764803  203173 out.go:239]   Mar 07 22:39:15 old-k8s-version-497253 kubelet[660]: E0307 22:39:15.913973     660 pod_workers.go:191] Error syncing pod 14312bc8-5525-434e-b6f7-56306d708544 ("dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"
	W0307 22:39:38.764817  203173 out.go:239]   Mar 07 22:39:16 old-k8s-version-497253 kubelet[660]: E0307 22:39:16.918241     660 pod_workers.go:191] Error syncing pod 1cd023e3-2966-4092-bcbd-4c5b6ca28aa3 ("metrics-server-9975d5f86-qmg9k_kube-system(1cd023e3-2966-4092-bcbd-4c5b6ca28aa3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0307 22:39:38.764824  203173 out.go:239]   Mar 07 22:39:26 old-k8s-version-497253 kubelet[660]: E0307 22:39:26.914096     660 pod_workers.go:191] Error syncing pod 14312bc8-5525-434e-b6f7-56306d708544 ("dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"
	W0307 22:39:38.764836  203173 out.go:239]   Mar 07 22:39:29 old-k8s-version-497253 kubelet[660]: E0307 22:39:29.914280     660 pod_workers.go:191] Error syncing pod 1cd023e3-2966-4092-bcbd-4c5b6ca28aa3 ("metrics-server-9975d5f86-qmg9k_kube-system(1cd023e3-2966-4092-bcbd-4c5b6ca28aa3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0307 22:39:38.764843  203173 out.go:304] Setting ErrFile to fd 2...
	I0307 22:39:38.764849  203173 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 22:39:48.764963  203173 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0307 22:39:48.777388  203173 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0307 22:39:48.794117  203173 out.go:177] 
	W0307 22:39:48.797346  203173 out.go:239] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W0307 22:39:48.797390  203173 out.go:239] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	W0307 22:39:48.797410  203173 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	W0307 22:39:48.797416  203173 out.go:239] * 
	W0307 22:39:48.798329  203173 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0307 22:39:48.800764  203173 out.go:177] 
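Worth noting in the block above: the healthz probe at 22:39:48 did return HTTP 200 ("ok"), so the apiserver itself was reachable; the K8S_UNHEALTHY_CONTROL_PLANE exit instead comes from the 6m0s node wait, which gave up because the control plane never reported v1.20.0. A hand-run equivalent of that probe, assuming the same endpoint and skipping TLS verification for brevity (depending on the cluster's anonymous-auth settings the endpoint may instead demand credentials), is:

	curl -ks https://192.168.76.2:8443/healthz   # expect: ok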
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	4436cb630d957       523cad1a4df73       2 minutes ago       Exited              dashboard-metrics-scraper   5                   44d01011917d1       dashboard-metrics-scraper-8d5bb5db8-85hs8
	426c41631cb95       ba04bb24b9575       4 minutes ago       Running             storage-provisioner         2                   6653df91645aa       storage-provisioner
	778e16c451c2e       20b332c9a70d8       5 minutes ago       Running             kubernetes-dashboard        0                   991993383d4fc       kubernetes-dashboard-cd95d586-dsjvb
	2543aa094e873       4740c1948d3fc       5 minutes ago       Running             kindnet-cni                 1                   e61176bb8cd1b       kindnet-wjf8t
	8348ba5b7e636       1611cd07b61d5       5 minutes ago       Running             busybox                     1                   5aeff1b1c30c7       busybox
	f98ff695720f5       ba04bb24b9575       5 minutes ago       Exited              storage-provisioner         1                   6653df91645aa       storage-provisioner
	7998095a37dd2       25a5233254979       5 minutes ago       Running             kube-proxy                  1                   7039632b6e451       kube-proxy-8s7l5
	413452dbe13bf       db91994f4ee8f       5 minutes ago       Running             coredns                     1                   f9ab0aadba161       coredns-74ff55c5b-jjsjg
	14afb08fcf033       05b738aa1bc63       5 minutes ago       Running             etcd                        1                   a068fb6a57558       etcd-old-k8s-version-497253
	157ece61e2d99       e7605f88f17d6       5 minutes ago       Running             kube-scheduler              1                   9152ada8c00b2       kube-scheduler-old-k8s-version-497253
	9ba0a9f80bc7b       1df8a2b116bd1       5 minutes ago       Running             kube-controller-manager     1                   09141809c0920       kube-controller-manager-old-k8s-version-497253
	8b2ed0287cc5b       2c08bbbc02d3a       5 minutes ago       Running             kube-apiserver              1                   6b1c4b2e351b8       kube-apiserver-old-k8s-version-497253
	643b7fa0bae9b       1611cd07b61d5       6 minutes ago       Exited              busybox                     0                   5eee39290bc57       busybox
	a105c02644c9c       db91994f4ee8f       7 minutes ago       Exited              coredns                     0                   05292a255b2c5       coredns-74ff55c5b-jjsjg
	e9768721745ab       4740c1948d3fc       8 minutes ago       Exited              kindnet-cni                 0                   2019cc751ea77       kindnet-wjf8t
	f1df99d1a705a       25a5233254979       8 minutes ago       Exited              kube-proxy                  0                   4d0a6e2e10291       kube-proxy-8s7l5
	cc2db588ff857       2c08bbbc02d3a       8 minutes ago       Exited              kube-apiserver              0                   9d722cf6ff2d1       kube-apiserver-old-k8s-version-497253
	2ca89421ca012       e7605f88f17d6       8 minutes ago       Exited              kube-scheduler              0                   96e4810a8ace9       kube-scheduler-old-k8s-version-497253
	bd7a60d8d4f26       1df8a2b116bd1       8 minutes ago       Exited              kube-controller-manager     0                   cbadaf7de17c5       kube-controller-manager-old-k8s-version-497253
	dabbb0b4fd257       05b738aa1bc63       8 minutes ago       Exited              etcd                        0                   ccdc32fabb99f       etcd-old-k8s-version-497253
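	
	Note: the table above is CRI container state as reported by containerd; the Exited/Running pairs reflect the node restart partway through the test. It can be regenerated from inside the node (a sketch, using this run's profile name):
	
	  minikube -p old-k8s-version-497253 ssh -- sudo crictl ps -a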
	
	
	==> containerd <==
	Mar 07 22:35:39 old-k8s-version-497253 containerd[568]: time="2024-03-07T22:35:39.923699498Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" host=fake.domain
	Mar 07 22:35:39 old-k8s-version-497253 containerd[568]: time="2024-03-07T22:35:39.936962378Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Mar 07 22:36:14 old-k8s-version-497253 containerd[568]: time="2024-03-07T22:36:14.917347669Z" level=info msg="CreateContainer within sandbox \"44d01011917d15f9ec9eeccbe3872195fe11cfde5e4a9fe132e59ede77a89141\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:4,}"
	Mar 07 22:36:14 old-k8s-version-497253 containerd[568]: time="2024-03-07T22:36:14.935092775Z" level=info msg="CreateContainer within sandbox \"44d01011917d15f9ec9eeccbe3872195fe11cfde5e4a9fe132e59ede77a89141\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:4,} returns container id \"d5b1fc3d305785db148edcf910073adc71bfca8139c58401f9a2e5075152e174\""
	Mar 07 22:36:14 old-k8s-version-497253 containerd[568]: time="2024-03-07T22:36:14.935826391Z" level=info msg="StartContainer for \"d5b1fc3d305785db148edcf910073adc71bfca8139c58401f9a2e5075152e174\""
	Mar 07 22:36:15 old-k8s-version-497253 containerd[568]: time="2024-03-07T22:36:15.013462514Z" level=info msg="StartContainer for \"d5b1fc3d305785db148edcf910073adc71bfca8139c58401f9a2e5075152e174\" returns successfully"
	Mar 07 22:36:15 old-k8s-version-497253 containerd[568]: time="2024-03-07T22:36:15.055714912Z" level=info msg="shim disconnected" id=d5b1fc3d305785db148edcf910073adc71bfca8139c58401f9a2e5075152e174
	Mar 07 22:36:15 old-k8s-version-497253 containerd[568]: time="2024-03-07T22:36:15.055785140Z" level=warning msg="cleaning up after shim disconnected" id=d5b1fc3d305785db148edcf910073adc71bfca8139c58401f9a2e5075152e174 namespace=k8s.io
	Mar 07 22:36:15 old-k8s-version-497253 containerd[568]: time="2024-03-07T22:36:15.055798087Z" level=info msg="cleaning up dead shim"
	Mar 07 22:36:15 old-k8s-version-497253 containerd[568]: time="2024-03-07T22:36:15.064581147Z" level=warning msg="cleanup warnings time=\"2024-03-07T22:36:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3005 runtime=io.containerd.runc.v2\n"
	Mar 07 22:36:15 old-k8s-version-497253 containerd[568]: time="2024-03-07T22:36:15.685192143Z" level=info msg="RemoveContainer for \"b892029056eaffad6ab44b5906e57d535688d742cae5b351706314b99ed92d03\""
	Mar 07 22:36:15 old-k8s-version-497253 containerd[568]: time="2024-03-07T22:36:15.692914974Z" level=info msg="RemoveContainer for \"b892029056eaffad6ab44b5906e57d535688d742cae5b351706314b99ed92d03\" returns successfully"
	Mar 07 22:37:10 old-k8s-version-497253 containerd[568]: time="2024-03-07T22:37:10.918630972Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Mar 07 22:37:10 old-k8s-version-497253 containerd[568]: time="2024-03-07T22:37:10.924106833Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" host=fake.domain
	Mar 07 22:37:10 old-k8s-version-497253 containerd[568]: time="2024-03-07T22:37:10.926122068Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Mar 07 22:37:36 old-k8s-version-497253 containerd[568]: time="2024-03-07T22:37:36.918555423Z" level=info msg="CreateContainer within sandbox \"44d01011917d15f9ec9eeccbe3872195fe11cfde5e4a9fe132e59ede77a89141\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:5,}"
	Mar 07 22:37:36 old-k8s-version-497253 containerd[568]: time="2024-03-07T22:37:36.941292093Z" level=info msg="CreateContainer within sandbox \"44d01011917d15f9ec9eeccbe3872195fe11cfde5e4a9fe132e59ede77a89141\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:5,} returns container id \"4436cb630d957568995c4d713a0f34f178beecbf0a05f492026f2c211f72997b\""
	Mar 07 22:37:36 old-k8s-version-497253 containerd[568]: time="2024-03-07T22:37:36.941955991Z" level=info msg="StartContainer for \"4436cb630d957568995c4d713a0f34f178beecbf0a05f492026f2c211f72997b\""
	Mar 07 22:37:37 old-k8s-version-497253 containerd[568]: time="2024-03-07T22:37:37.005805825Z" level=info msg="StartContainer for \"4436cb630d957568995c4d713a0f34f178beecbf0a05f492026f2c211f72997b\" returns successfully"
	Mar 07 22:37:37 old-k8s-version-497253 containerd[568]: time="2024-03-07T22:37:37.051827230Z" level=info msg="shim disconnected" id=4436cb630d957568995c4d713a0f34f178beecbf0a05f492026f2c211f72997b
	Mar 07 22:37:37 old-k8s-version-497253 containerd[568]: time="2024-03-07T22:37:37.051996288Z" level=warning msg="cleaning up after shim disconnected" id=4436cb630d957568995c4d713a0f34f178beecbf0a05f492026f2c211f72997b namespace=k8s.io
	Mar 07 22:37:37 old-k8s-version-497253 containerd[568]: time="2024-03-07T22:37:37.052015234Z" level=info msg="cleaning up dead shim"
	Mar 07 22:37:37 old-k8s-version-497253 containerd[568]: time="2024-03-07T22:37:37.060560182Z" level=warning msg="cleanup warnings time=\"2024-03-07T22:37:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3235 runtime=io.containerd.runc.v2\n"
	Mar 07 22:37:37 old-k8s-version-497253 containerd[568]: time="2024-03-07T22:37:37.877927721Z" level=info msg="RemoveContainer for \"d5b1fc3d305785db148edcf910073adc71bfca8139c58401f9a2e5075152e174\""
	Mar 07 22:37:37 old-k8s-version-497253 containerd[568]: time="2024-03-07T22:37:37.883426474Z" level=info msg="RemoveContainer for \"d5b1fc3d305785db148edcf910073adc71bfca8139c58401f9a2e5075152e174\" returns successfully"
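	
	Note: the repeated pull failures for fake.domain/registry.k8s.io/echoserver:1.4 look deliberate for this suite; the registry host does not resolve, so metrics-server stays in ImagePullBackOff. The DNS failure can be reproduced directly (a sketch):
	
	  minikube -p old-k8s-version-497253 ssh -- sudo crictl pull fake.domain/registry.k8s.io/echoserver:1.4
	  # expected: "dial tcp: lookup fake.domain ... no such host", matching the errors above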
	
	
	==> coredns [413452dbe13bfb277303d92484791aa53ae9a198a089364135d2347bf8475623] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:57871 - 9601 "HINFO IN 2553362965738671866.3497325676005983756. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.02496585s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	I0307 22:34:38.573711       1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-03-07 22:34:08.573113987 +0000 UTC m=+0.032962542) (total time: 30.000503476s):
	Trace[2019727887]: [30.000503476s] [30.000503476s] END
	E0307 22:34:38.573740       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0307 22:34:38.574118       1 trace.go:116] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-03-07 22:34:08.573664548 +0000 UTC m=+0.033513095) (total time: 30.000436211s):
	Trace[939984059]: [30.000436211s] [30.000436211s] END
	E0307 22:34:38.574194       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0307 22:34:38.574372       1 trace.go:116] Trace[911902081]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-03-07 22:34:08.57390188 +0000 UTC m=+0.033750435) (total time: 30.000458668s):
	Trace[911902081]: [30.000458668s] [30.000458668s] END
	E0307 22:34:38.574420       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
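	
	Note: the three i/o timeouts above cover CoreDNS's first 30s after the node restart, before kube-proxy had reprogrammed the service VIP 10.96.0.1; the "Still waiting" lines stop once the API becomes reachable. The VIP can be confirmed with (a sketch):
	
	  kubectl --context old-k8s-version-497253 get svc kubernetes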
	
	
	==> coredns [a105c02644c9c901090c4c9792a32b0396cee09395513751eeb1be7fbf4b8477] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:46430 - 30382 "HINFO IN 886684529790920993.5393081510785046323. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.022716021s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-497253
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-497253
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3e3656b8cff33aafa60dd2a07a4b34bce666a6a6
	                    minikube.k8s.io/name=old-k8s-version-497253
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_07T22_31_32_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 07 Mar 2024 22:31:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-497253
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 07 Mar 2024 22:39:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 07 Mar 2024 22:34:56 +0000   Thu, 07 Mar 2024 22:31:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 07 Mar 2024 22:34:56 +0000   Thu, 07 Mar 2024 22:31:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 07 Mar 2024 22:34:56 +0000   Thu, 07 Mar 2024 22:31:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 07 Mar 2024 22:34:56 +0000   Thu, 07 Mar 2024 22:31:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-497253
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	System Info:
	  Machine ID:                 c5a8e0ea18bc41b3af60da2f00850b6e
	  System UUID:                c3622802-fd6d-46e5-9412-eb136d546444
	  Boot ID:                    5a38287e-066f-43b8-a303-a60cdb318f8a
	  Kernel Version:             5.15.0-1055-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.6.28
	  Kubelet Version:            v1.20.0
	  Kube-Proxy Version:         v1.20.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m39s
	  kube-system                 coredns-74ff55c5b-jjsjg                           100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     8m3s
	  kube-system                 etcd-old-k8s-version-497253                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         8m11s
	  kube-system                 kindnet-wjf8t                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      8m3s
	  kube-system                 kube-apiserver-old-k8s-version-497253             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m11s
	  kube-system                 kube-controller-manager-old-k8s-version-497253    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m11s
	  kube-system                 kube-proxy-8s7l5                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m3s
	  kube-system                 kube-scheduler-old-k8s-version-497253             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m11s
	  kube-system                 metrics-server-9975d5f86-qmg9k                    100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         6m28s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m1s
	  kubernetes-dashboard        dashboard-metrics-scraper-8d5bb5db8-85hs8         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m27s
	  kubernetes-dashboard        kubernetes-dashboard-cd95d586-dsjvb               0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             420Mi (5%)  220Mi (2%)
	  ephemeral-storage  100Mi (0%)  0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  8m31s (x4 over 8m31s)  kubelet     Node old-k8s-version-497253 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m31s (x5 over 8m31s)  kubelet     Node old-k8s-version-497253 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m31s (x4 over 8m31s)  kubelet     Node old-k8s-version-497253 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m12s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m12s                  kubelet     Node old-k8s-version-497253 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m12s                  kubelet     Node old-k8s-version-497253 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m12s                  kubelet     Node old-k8s-version-497253 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m11s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                8m3s                   kubelet     Node old-k8s-version-497253 status is now: NodeReady
	  Normal  Starting                 8m2s                   kube-proxy  Starting kube-proxy.
	  Normal  Starting                 5m59s                  kubelet     Starting kubelet.
	  Normal  NodeAllocatableEnforced  5m59s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m58s (x8 over 5m59s)  kubelet     Node old-k8s-version-497253 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m58s (x7 over 5m59s)  kubelet     Node old-k8s-version-497253 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m58s (x8 over 5m59s)  kubelet     Node old-k8s-version-497253 status is now: NodeHasSufficientPID
	  Normal  Starting                 5m42s                  kube-proxy  Starting kube-proxy.
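	
	Note: the duplicated Starting/NodeHasSufficient* events record two kubelet boots (initial start at ~8m, restart at ~6m), consistent with the Exited/Running container pairs earlier. The listing can be regenerated with (a sketch):
	
	  kubectl --context old-k8s-version-497253 describe node old-k8s-version-497253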
	
	
	==> dmesg <==
	[  +0.000972] FS-Cache: N-cookie d=000000000f302b63{9p.inode} n=00000000837bca78
	[  +0.001086] FS-Cache: N-key=[8] '8f385c0100000000'
	[  +0.003138] FS-Cache: Duplicate cookie detected
	[  +0.000724] FS-Cache: O-cookie c=00000006 [p=00000003 fl=226 nc=0 na=1]
	[  +0.000996] FS-Cache: O-cookie d=000000000f302b63{9p.inode} n=00000000f001752e
	[  +0.001185] FS-Cache: O-key=[8] '8f385c0100000000'
	[  +0.000754] FS-Cache: N-cookie c=0000000d [p=00000003 fl=2 nc=0 na=1]
	[  +0.000998] FS-Cache: N-cookie d=000000000f302b63{9p.inode} n=0000000098f86bc0
	[  +0.001117] FS-Cache: N-key=[8] '8f385c0100000000'
	[  +2.904901] FS-Cache: Duplicate cookie detected
	[  +0.000775] FS-Cache: O-cookie c=00000004 [p=00000003 fl=226 nc=0 na=1]
	[  +0.001027] FS-Cache: O-cookie d=000000000f302b63{9p.inode} n=00000000eacda38f
	[  +0.001143] FS-Cache: O-key=[8] '8e385c0100000000'
	[  +0.000817] FS-Cache: N-cookie c=0000000f [p=00000003 fl=2 nc=0 na=1]
	[  +0.001005] FS-Cache: N-cookie d=000000000f302b63{9p.inode} n=00000000837bca78
	[  +0.001169] FS-Cache: N-key=[8] '8e385c0100000000'
	[  +0.298622] FS-Cache: Duplicate cookie detected
	[  +0.000783] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
	[  +0.001098] FS-Cache: O-cookie d=000000000f302b63{9p.inode} n=00000000804cecc5
	[  +0.001206] FS-Cache: O-key=[8] '96385c0100000000'
	[  +0.000727] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.000984] FS-Cache: N-cookie d=000000000f302b63{9p.inode} n=000000001811317b
	[  +0.001130] FS-Cache: N-key=[8] '96385c0100000000'
	[Mar 7 22:00] hrtimer: interrupt took 15664673 ns
	[Mar 7 22:23] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [14afb08fcf0333a1d360d000f1fa3b158ae712b1b2a3e6c52b164c05245f920d] <==
	2024-03-07 22:35:48.330400 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-07 22:35:58.330224 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-07 22:36:08.330314 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-07 22:36:18.330234 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-07 22:36:28.330381 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-07 22:36:38.330334 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-07 22:36:48.330315 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-07 22:36:58.330444 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-07 22:37:08.330431 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-07 22:37:18.330276 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-07 22:37:28.330453 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-07 22:37:38.330270 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-07 22:37:48.330977 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-07 22:37:58.330358 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-07 22:38:08.330352 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-07 22:38:18.330345 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-07 22:38:28.330410 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-07 22:38:38.330780 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-07 22:38:48.330333 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-07 22:38:58.330300 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-07 22:39:08.330422 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-07 22:39:18.330484 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-07 22:39:28.330393 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-07 22:39:38.330533 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-07 22:39:48.330347 I | etcdserver/api/etcdhttp: /health OK (status code 200)
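	
	Note: etcd answered its /health endpoint every 10s throughout the window, so storage is not the failure here. An equivalent probe from inside the etcd pod (a sketch; the certificate paths are assumed from minikube's default layout and may differ):
	
	  kubectl --context old-k8s-version-497253 -n kube-system exec etcd-old-k8s-version-497253 -- \
	    etcdctl --endpoints=https://127.0.0.1:2379 \
	      --cacert=/var/lib/minikube/certs/etcd/ca.crt \
	      --cert=/var/lib/minikube/certs/etcd/server.crt \
	      --key=/var/lib/minikube/certs/etcd/server.key \
	      endpoint health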
	
	
	==> etcd [dabbb0b4fd2574e92c3a53182c494ec08df151f99bd21d27189df149b7f8961c] <==
	2024-03-07 22:31:21.219064 I | etcdserver/membership: added member ea7e25599daad906 [https://192.168.76.2:2380] to cluster 6f20f2c4b2fb5f8a
	raft2024/03/07 22:31:21 INFO: ea7e25599daad906 is starting a new election at term 1
	raft2024/03/07 22:31:21 INFO: ea7e25599daad906 became candidate at term 2
	raft2024/03/07 22:31:21 INFO: ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2
	raft2024/03/07 22:31:21 INFO: ea7e25599daad906 became leader at term 2
	raft2024/03/07 22:31:21 INFO: raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2
	2024-03-07 22:31:21.804097 I | etcdserver: setting up the initial cluster version to 3.4
	2024-03-07 22:31:21.807948 N | etcdserver/membership: set the initial cluster version to 3.4
	2024-03-07 22:31:21.808118 I | etcdserver/api: enabled capabilities for version 3.4
	2024-03-07 22:31:21.808231 I | etcdserver: published {Name:old-k8s-version-497253 ClientURLs:[https://192.168.76.2:2379]} to cluster 6f20f2c4b2fb5f8a
	2024-03-07 22:31:21.808450 I | embed: ready to serve client requests
	2024-03-07 22:31:21.810286 I | embed: serving client requests on 192.168.76.2:2379
	2024-03-07 22:31:21.812222 I | embed: ready to serve client requests
	2024-03-07 22:31:21.816811 I | embed: serving client requests on 127.0.0.1:2379
	2024-03-07 22:31:40.813658 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-07 22:31:50.581074 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-07 22:32:00.578784 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-07 22:32:10.578368 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-07 22:32:20.578621 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-07 22:32:30.578623 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-07 22:32:40.578332 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-07 22:32:50.578471 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-07 22:33:00.578434 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-07 22:33:10.578534 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-07 22:33:20.578488 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> kernel <==
	 22:39:51 up  1:22,  0 users,  load average: 2.03, 1.91, 2.39
	Linux old-k8s-version-497253 5.15.0-1055-aws #60~20.04.1-Ubuntu SMP Thu Feb 22 15:54:21 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kindnet [2543aa094e87373b5f846e8bdc2bbadbdda2c55f6d326c6db6b9216375dde5e6] <==
	I0307 22:37:49.932951       1 main.go:227] handling current node
	I0307 22:37:59.949388       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0307 22:37:59.949420       1 main.go:227] handling current node
	I0307 22:38:09.963513       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0307 22:38:09.963545       1 main.go:227] handling current node
	I0307 22:38:19.978061       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0307 22:38:19.978154       1 main.go:227] handling current node
	I0307 22:38:29.994459       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0307 22:38:29.994553       1 main.go:227] handling current node
	I0307 22:38:40.035937       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0307 22:38:40.036167       1 main.go:227] handling current node
	I0307 22:38:50.064914       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0307 22:38:50.064940       1 main.go:227] handling current node
	I0307 22:39:00.089742       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0307 22:39:00.089979       1 main.go:227] handling current node
	I0307 22:39:10.101752       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0307 22:39:10.101782       1 main.go:227] handling current node
	I0307 22:39:20.126768       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0307 22:39:20.127159       1 main.go:227] handling current node
	I0307 22:39:30.131882       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0307 22:39:30.131913       1 main.go:227] handling current node
	I0307 22:39:40.150512       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0307 22:39:40.150545       1 main.go:227] handling current node
	I0307 22:39:50.155211       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0307 22:39:50.155240       1 main.go:227] handling current node
	
	
	==> kindnet [e9768721745ab392d0ad14d4055c5cf01cc780491a01953b4b45bf77d2d3b062] <==
	I0307 22:31:49.521476       1 main.go:107] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I0307 22:31:49.521576       1 main.go:116] setting mtu 1500 for CNI 
	I0307 22:31:49.521590       1 main.go:146] kindnetd IP family: "ipv4"
	I0307 22:31:49.521600       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0307 22:31:49.818048       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0307 22:31:49.818084       1 main.go:227] handling current node
	I0307 22:31:59.842544       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0307 22:31:59.842578       1 main.go:227] handling current node
	I0307 22:32:09.862427       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0307 22:32:09.862459       1 main.go:227] handling current node
	I0307 22:32:19.878608       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0307 22:32:19.878857       1 main.go:227] handling current node
	I0307 22:32:29.896563       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0307 22:32:29.896592       1 main.go:227] handling current node
	I0307 22:32:39.906023       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0307 22:32:39.906054       1 main.go:227] handling current node
	I0307 22:32:49.916848       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0307 22:32:49.916879       1 main.go:227] handling current node
	I0307 22:32:59.932381       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0307 22:32:59.932410       1 main.go:227] handling current node
	I0307 22:33:09.947125       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0307 22:33:09.947155       1 main.go:227] handling current node
	I0307 22:33:19.958833       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0307 22:33:19.958887       1 main.go:227] handling current node
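	
	Note: this is the pre-restart kindnet instance (the Exited container in the status table); its log ends at 22:33:19, when the node was restarted. The same output can be fetched via the --previous flag (a sketch):
	
	  kubectl --context old-k8s-version-497253 -n kube-system logs kindnet-wjf8t --previous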
	
	
	==> kube-apiserver [8b2ed0287cc5bbf594cda29d926ae828c7d6d12ad490285c6e855c8824388d29] <==
	I0307 22:36:35.837936       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0307 22:36:35.838030       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0307 22:37:08.966807       1 handler_proxy.go:102] no RequestInfo found in the context
	E0307 22:37:08.966893       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0307 22:37:08.966907       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0307 22:37:18.442429       1 client.go:360] parsed scheme: "passthrough"
	I0307 22:37:18.442474       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0307 22:37:18.442483       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0307 22:37:55.292524       1 client.go:360] parsed scheme: "passthrough"
	I0307 22:37:55.292573       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0307 22:37:55.292586       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0307 22:38:27.183951       1 client.go:360] parsed scheme: "passthrough"
	I0307 22:38:27.184000       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0307 22:38:27.184009       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0307 22:39:06.182900       1 handler_proxy.go:102] no RequestInfo found in the context
	E0307 22:39:06.182972       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0307 22:39:06.182990       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0307 22:39:12.030008       1 client.go:360] parsed scheme: "passthrough"
	I0307 22:39:12.030067       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0307 22:39:12.030076       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0307 22:39:46.045738       1 client.go:360] parsed scheme: "passthrough"
	I0307 22:39:46.045781       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0307 22:39:46.045789       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
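	
	Note: the recurring 503s for v1beta1.metrics.k8s.io follow directly from metrics-server never starting (the ImagePullBackOff above); the aggregated API therefore stays unavailable and the OpenAPI controller requeues. The state can be inspected with (a sketch):
	
	  kubectl --context old-k8s-version-497253 get apiservice v1beta1.metrics.k8s.io
	  # AVAILABLE is expected to show False (MissingEndpoints) while metrics-server is down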
	
	
	==> kube-apiserver [cc2db588ff857bfb95417bfc8b125ca64e6caba952ba4acd81ec27522fde3127] <==
	I0307 22:31:29.087614       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0307 22:31:29.087742       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0307 22:31:29.124126       1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
	I0307 22:31:29.132254       1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
	I0307 22:31:29.132320       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0307 22:31:29.575542       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0307 22:31:29.614450       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0307 22:31:29.765368       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I0307 22:31:29.766573       1 controller.go:606] quota admission added evaluator for: endpoints
	I0307 22:31:29.770980       1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0307 22:31:30.764704       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	I0307 22:31:31.326101       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I0307 22:31:31.379181       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I0307 22:31:39.826963       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	I0307 22:31:48.183010       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	I0307 22:31:48.310857       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
	I0307 22:31:57.698930       1 client.go:360] parsed scheme: "passthrough"
	I0307 22:31:57.699137       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0307 22:31:57.699206       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0307 22:32:37.792862       1 client.go:360] parsed scheme: "passthrough"
	I0307 22:32:37.792908       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0307 22:32:37.792916       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0307 22:33:18.151658       1 client.go:360] parsed scheme: "passthrough"
	I0307 22:33:18.151928       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0307 22:33:18.151946       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	
	==> kube-controller-manager [9ba0a9f80bc7b1866c7509e8df81dcc0d3e74b8702a84d7676660a9395cf59fc] <==
	W0307 22:35:30.571467       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0307 22:35:56.542599       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0307 22:36:02.222012       1 request.go:655] Throttling request took 1.048285567s, request: GET:https://192.168.76.2:8443/apis/autoscaling/v1?timeout=32s
	W0307 22:36:03.073522       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0307 22:36:27.050875       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0307 22:36:34.723933       1 request.go:655] Throttling request took 1.048325721s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W0307 22:36:35.575650       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0307 22:36:57.552988       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0307 22:37:07.226187       1 request.go:655] Throttling request took 1.048242428s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W0307 22:37:08.077702       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0307 22:37:28.055274       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0307 22:37:39.728368       1 request.go:655] Throttling request took 1.048475383s, request: GET:https://192.168.76.2:8443/apis/events.k8s.io/v1?timeout=32s
	W0307 22:37:40.579770       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0307 22:37:58.557209       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0307 22:38:12.230291       1 request.go:655] Throttling request took 1.04828379s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W0307 22:38:13.081711       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0307 22:38:29.058986       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0307 22:38:44.732237       1 request.go:655] Throttling request took 1.048479912s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W0307 22:38:45.583795       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0307 22:38:59.560851       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0307 22:39:17.234378       1 request.go:655] Throttling request took 1.048577169s, request: GET:https://192.168.76.2:8443/apis/scheduling.k8s.io/v1?timeout=32s
	W0307 22:39:18.086335       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0307 22:39:30.062908       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0307 22:39:49.737002       1 request.go:655] Throttling request took 1.048222555s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W0307 22:39:50.588909       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
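	
	Note: the throttled GETs and "failed to discover some groups" warnings are the same metrics.k8s.io outage surfacing in API discovery; the garbage-collector and quota controllers retry roughly every 30s. Discovery reports the same problem (a sketch):
	
	  kubectl --context old-k8s-version-497253 api-resources
	  # prints "unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1 ..." alongside the working groups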
	
	
	==> kube-controller-manager [bd7a60d8d4f26067debaaf5b2c23fe94dcfed76118b17104051bc8c755b35ce1] <==
	I0307 22:31:48.314241       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-jjsjg"
	I0307 22:31:48.316223       1 range_allocator.go:373] Set node old-k8s-version-497253 PodCIDR to [10.244.0.0/24]
	I0307 22:31:48.361151       1 shared_informer.go:247] Caches are synced for endpoint_slice 
	E0307 22:31:48.458759       1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
	I0307 22:31:48.460947       1 shared_informer.go:247] Caches are synced for endpoint 
	I0307 22:31:48.461740       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-wjf8t"
	I0307 22:31:48.463028       1 event.go:291] "Event occurred" object="kube-system/kube-scheduler-old-k8s-version-497253" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0307 22:31:48.489836       1 event.go:291] "Event occurred" object="kube-system/etcd-old-k8s-version-497253" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0307 22:31:48.496520       1 event.go:291] "Event occurred" object="kube-system/kube-apiserver-old-k8s-version-497253" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	E0307 22:31:48.510810       1 clusterroleaggregation_controller.go:181] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
	I0307 22:31:48.511274       1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring 
	I0307 22:31:48.513925       1 shared_informer.go:247] Caches are synced for resource quota 
	I0307 22:31:48.526388       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-8s7l5"
	I0307 22:31:48.541273       1 shared_informer.go:247] Caches are synced for resource quota 
	E0307 22:31:48.621831       1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"902430a6-f379-499b-913f-f3108b08012b", ResourceVersion:"263", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63845447491, loc:(*time.Location)(0x632eb80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4001872500), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4001872520)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.
LabelSelector)(0x4001872540), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.Gl
usterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0x40012cfc40), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001872
560), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeS
ource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001872580), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil),
AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.20.0", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0x40018725c0)}}, Resources:v1.R
esourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x40016ee480), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPo
licy:"Always", TerminationGracePeriodSeconds:(*int64)(0x400169ae68), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x4000a6e540), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), Runtime
ClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x4000286b30)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x400169aeb8)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	I0307 22:31:48.635533       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	E0307 22:31:48.659559       1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"902430a6-f379-499b-913f-f3108b08012b", ResourceVersion:"411", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63845447491, loc:(*time.Location)(0x632eb80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4001f18580), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4001f185a0)}, v1.ManagedFieldsEntry{Manager:"kube-co
ntroller-manager", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4001f185c0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4001f185e0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x4001f18600), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElastic
BlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0x4001f0e4c0), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSour
ce)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001f18620), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSo
urce)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001f18640), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil),
Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.20.0", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil),
WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4001f18680)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"F
ile", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x4001d6fbc0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x4001ef89f8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x4000342ee0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)
(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x400000f8a8)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x4001ef8a48)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:1, NumberReady:0, ObservedGeneration:1, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:1, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest ve
rsion and try again
	I0307 22:31:48.860784       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0307 22:31:48.860810       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0307 22:31:48.935987       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0307 22:31:49.951566       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
	I0307 22:31:49.973763       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-v8qft"
	I0307 22:31:53.262250       1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I0307 22:33:22.280813       1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-9975d5f86 to 1"
	E0307 22:33:22.438444       1 clusterroleaggregation_controller.go:181] view failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "view": the object has been modified; please apply your changes to the latest version and try again
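The two "Operation cannot be fulfilled ... the object has been modified" errors above (for the kube-proxy DaemonSet status write and the "view" ClusterRole) are ordinary optimistic-concurrency conflicts: the controller read an object at one resourceVersion and tried to write it back after another client had already updated it. The expected client behavior is to re-read and retry, which the controller does on its next sync, so these lines are typically benign. A minimal client-go sketch of that retry pattern, assuming in-cluster credentials and using a hypothetical annotation as the mutation:

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/util/retry"
	)

	func main() {
		cfg, err := rest.InClusterConfig() // assumes this runs inside a pod
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
		ds := client.AppsV1().DaemonSets("kube-system")

		// RetryOnConflict re-runs the closure whenever the write fails with a
		// 409 Conflict, re-reading the latest resourceVersion on each attempt.
		err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
			current, getErr := ds.Get(context.TODO(), "kube-proxy", metav1.GetOptions{})
			if getErr != nil {
				return getErr
			}
			if current.Annotations == nil {
				current.Annotations = map[string]string{}
			}
			current.Annotations["example/touched"] = "true" // hypothetical mutation
			_, updateErr := ds.Update(context.TODO(), current, metav1.UpdateOptions{})
			return updateErr
		})
		if err != nil {
			fmt.Println("update failed after retries:", err)
		}
	}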
	
	
	==> kube-proxy [7998095a37dd2896f96215e8ab5d29ae495d3dc1a00b119506f8ace55156b07a] <==
	I0307 22:34:08.942252       1 node.go:172] Successfully retrieved node IP: 192.168.76.2
	I0307 22:34:08.942320       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
	W0307 22:34:08.999961       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0307 22:34:09.000518       1 server_others.go:185] Using iptables Proxier.
	I0307 22:34:09.010380       1 server.go:650] Version: v1.20.0
	I0307 22:34:09.021099       1 config.go:315] Starting service config controller
	I0307 22:34:09.021126       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0307 22:34:09.021160       1 config.go:224] Starting endpoint slice config controller
	I0307 22:34:09.021164       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0307 22:34:09.121647       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0307 22:34:09.121713       1 shared_informer.go:247] Caches are synced for service config 
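The "Waiting for caches to sync" / "Caches are synced" pairs above are client-go's shared-informer startup handshake: a component starts its informers, then blocks until the initial LIST for every watched type has been delivered before it begins processing events. A minimal sketch of that handshake, assuming a hypothetical kubeconfig path:

	package main

	import (
		"fmt"
		"time"

		"k8s.io/client-go/informers"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/cache"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Hypothetical kubeconfig path; in-cluster components use rest.InClusterConfig instead.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		factory := informers.NewSharedInformerFactory(client, 30*time.Second)
		svcInformer := factory.Core().V1().Services().Informer()

		stop := make(chan struct{})
		defer close(stop)
		factory.Start(stop) // kicks off the LIST+WATCH goroutines

		// Equivalent of "Waiting for caches to sync": block until the first LIST lands.
		if !cache.WaitForCacheSync(stop, svcInformer.HasSynced) {
			panic("timed out waiting for caches to sync")
		}
		fmt.Println("caches are synced; safe to start processing")
	}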
	
	
	==> kube-proxy [f1df99d1a705a4556e4be13c16d400b99149438c96bfe6da863b8b041983b036] <==
	I0307 22:31:49.457770       1 node.go:172] Successfully retrieved node IP: 192.168.76.2
	I0307 22:31:49.457856       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
	W0307 22:31:49.489685       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0307 22:31:49.489777       1 server_others.go:185] Using iptables Proxier.
	I0307 22:31:49.490010       1 server.go:650] Version: v1.20.0
	I0307 22:31:49.490568       1 config.go:315] Starting service config controller
	I0307 22:31:49.490584       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0307 22:31:49.492710       1 config.go:224] Starting endpoint slice config controller
	I0307 22:31:49.492720       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0307 22:31:49.590661       1 shared_informer.go:247] Caches are synced for service config 
	I0307 22:31:49.592838       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	
	==> kube-scheduler [157ece61e2d99f79b42530111727fc9798a3b91433c5d3a5cd83cefec75a6a67] <==
	I0307 22:34:00.206327       1 serving.go:331] Generated self-signed cert in-memory
	W0307 22:34:05.149298       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0307 22:34:05.149331       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0307 22:34:05.149353       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0307 22:34:05.149358       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0307 22:34:05.392928       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0307 22:34:05.405077       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0307 22:34:05.405098       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0307 22:34:05.405123       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0307 22:34:05.508413       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
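The requestheader_controller warnings above are a startup race: immediately after the restart, system:kube-scheduler cannot yet read the extension-apiserver-authentication ConfigMap, and the log itself prints the rolebinding that would grant access. The scheduler tolerates the failure and proceeds (its client-ca cache syncs at 22:34:05.508413). For illustration only, a client-go sketch of creating such a binding, with hypothetical names standing in for the ROLEBINDING_NAME and YOUR_NS:YOUR_SA placeholders:

	package main

	import (
		"context"

		rbacv1 "k8s.io/api/rbac/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig") // hypothetical path
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		// Mirrors: kubectl create rolebinding -n kube-system ROLEBINDING_NAME \
		//   --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA
		rb := &rbacv1.RoleBinding{
			ObjectMeta: metav1.ObjectMeta{Name: "example-auth-reader", Namespace: "kube-system"},
			RoleRef: rbacv1.RoleRef{
				APIGroup: "rbac.authorization.k8s.io",
				Kind:     "Role",
				Name:     "extension-apiserver-authentication-reader",
			},
			Subjects: []rbacv1.Subject{{
				Kind:      "ServiceAccount",
				Name:      "example-sa", // hypothetical service account
				Namespace: "example-ns", // hypothetical namespace
			}},
		}
		if _, err := client.RbacV1().RoleBindings("kube-system").Create(context.TODO(), rb, metav1.CreateOptions{}); err != nil {
			panic(err)
		}
	}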
	
	
	==> kube-scheduler [2ca89421ca012456f503b4b92ac92a69bf6b0042461af4e2e3140e7adb8b27dc] <==
	W0307 22:31:28.239068       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0307 22:31:28.239076       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0307 22:31:28.239083       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0307 22:31:28.366324       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0307 22:31:28.368911       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0307 22:31:28.369074       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0307 22:31:28.369220       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0307 22:31:28.386542       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0307 22:31:28.393455       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0307 22:31:28.393547       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0307 22:31:28.395908       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0307 22:31:28.396208       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0307 22:31:28.400224       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0307 22:31:28.401886       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0307 22:31:28.401967       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0307 22:31:28.402039       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0307 22:31:28.402124       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0307 22:31:28.402504       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0307 22:31:28.408712       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0307 22:31:29.228992       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0307 22:31:29.260086       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0307 22:31:29.272690       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0307 22:31:29.352504       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0307 22:31:29.358003       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0307 22:31:29.969353       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kubelet <==
	Mar 07 22:38:11 old-k8s-version-497253 kubelet[660]: I0307 22:38:11.913564     660 scope.go:95] [topologymanager] RemoveContainer - Container ID: 4436cb630d957568995c4d713a0f34f178beecbf0a05f492026f2c211f72997b
	Mar 07 22:38:11 old-k8s-version-497253 kubelet[660]: E0307 22:38:11.913955     660 pod_workers.go:191] Error syncing pod 14312bc8-5525-434e-b6f7-56306d708544 ("dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"
	Mar 07 22:38:23 old-k8s-version-497253 kubelet[660]: E0307 22:38:23.914295     660 pod_workers.go:191] Error syncing pod 1cd023e3-2966-4092-bcbd-4c5b6ca28aa3 ("metrics-server-9975d5f86-qmg9k_kube-system(1cd023e3-2966-4092-bcbd-4c5b6ca28aa3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Mar 07 22:38:26 old-k8s-version-497253 kubelet[660]: I0307 22:38:26.913609     660 scope.go:95] [topologymanager] RemoveContainer - Container ID: 4436cb630d957568995c4d713a0f34f178beecbf0a05f492026f2c211f72997b
	Mar 07 22:38:26 old-k8s-version-497253 kubelet[660]: E0307 22:38:26.914365     660 pod_workers.go:191] Error syncing pod 14312bc8-5525-434e-b6f7-56306d708544 ("dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"
	Mar 07 22:38:35 old-k8s-version-497253 kubelet[660]: E0307 22:38:35.914352     660 pod_workers.go:191] Error syncing pod 1cd023e3-2966-4092-bcbd-4c5b6ca28aa3 ("metrics-server-9975d5f86-qmg9k_kube-system(1cd023e3-2966-4092-bcbd-4c5b6ca28aa3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Mar 07 22:38:38 old-k8s-version-497253 kubelet[660]: I0307 22:38:38.913663     660 scope.go:95] [topologymanager] RemoveContainer - Container ID: 4436cb630d957568995c4d713a0f34f178beecbf0a05f492026f2c211f72997b
	Mar 07 22:38:38 old-k8s-version-497253 kubelet[660]: E0307 22:38:38.914978     660 pod_workers.go:191] Error syncing pod 14312bc8-5525-434e-b6f7-56306d708544 ("dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"
	Mar 07 22:38:48 old-k8s-version-497253 kubelet[660]: E0307 22:38:48.914308     660 pod_workers.go:191] Error syncing pod 1cd023e3-2966-4092-bcbd-4c5b6ca28aa3 ("metrics-server-9975d5f86-qmg9k_kube-system(1cd023e3-2966-4092-bcbd-4c5b6ca28aa3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Mar 07 22:38:50 old-k8s-version-497253 kubelet[660]: I0307 22:38:50.913753     660 scope.go:95] [topologymanager] RemoveContainer - Container ID: 4436cb630d957568995c4d713a0f34f178beecbf0a05f492026f2c211f72997b
	Mar 07 22:38:50 old-k8s-version-497253 kubelet[660]: E0307 22:38:50.914584     660 pod_workers.go:191] Error syncing pod 14312bc8-5525-434e-b6f7-56306d708544 ("dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"
	Mar 07 22:39:01 old-k8s-version-497253 kubelet[660]: I0307 22:39:01.913784     660 scope.go:95] [topologymanager] RemoveContainer - Container ID: 4436cb630d957568995c4d713a0f34f178beecbf0a05f492026f2c211f72997b
	Mar 07 22:39:01 old-k8s-version-497253 kubelet[660]: E0307 22:39:01.914578     660 pod_workers.go:191] Error syncing pod 14312bc8-5525-434e-b6f7-56306d708544 ("dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"
	Mar 07 22:39:03 old-k8s-version-497253 kubelet[660]: E0307 22:39:03.914427     660 pod_workers.go:191] Error syncing pod 1cd023e3-2966-4092-bcbd-4c5b6ca28aa3 ("metrics-server-9975d5f86-qmg9k_kube-system(1cd023e3-2966-4092-bcbd-4c5b6ca28aa3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Mar 07 22:39:15 old-k8s-version-497253 kubelet[660]: I0307 22:39:15.913566     660 scope.go:95] [topologymanager] RemoveContainer - Container ID: 4436cb630d957568995c4d713a0f34f178beecbf0a05f492026f2c211f72997b
	Mar 07 22:39:15 old-k8s-version-497253 kubelet[660]: E0307 22:39:15.913973     660 pod_workers.go:191] Error syncing pod 14312bc8-5525-434e-b6f7-56306d708544 ("dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"
	Mar 07 22:39:16 old-k8s-version-497253 kubelet[660]: E0307 22:39:16.918241     660 pod_workers.go:191] Error syncing pod 1cd023e3-2966-4092-bcbd-4c5b6ca28aa3 ("metrics-server-9975d5f86-qmg9k_kube-system(1cd023e3-2966-4092-bcbd-4c5b6ca28aa3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Mar 07 22:39:26 old-k8s-version-497253 kubelet[660]: I0307 22:39:26.913685     660 scope.go:95] [topologymanager] RemoveContainer - Container ID: 4436cb630d957568995c4d713a0f34f178beecbf0a05f492026f2c211f72997b
	Mar 07 22:39:26 old-k8s-version-497253 kubelet[660]: E0307 22:39:26.914096     660 pod_workers.go:191] Error syncing pod 14312bc8-5525-434e-b6f7-56306d708544 ("dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"
	Mar 07 22:39:29 old-k8s-version-497253 kubelet[660]: E0307 22:39:29.914280     660 pod_workers.go:191] Error syncing pod 1cd023e3-2966-4092-bcbd-4c5b6ca28aa3 ("metrics-server-9975d5f86-qmg9k_kube-system(1cd023e3-2966-4092-bcbd-4c5b6ca28aa3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Mar 07 22:39:37 old-k8s-version-497253 kubelet[660]: I0307 22:39:37.914696     660 scope.go:95] [topologymanager] RemoveContainer - Container ID: 4436cb630d957568995c4d713a0f34f178beecbf0a05f492026f2c211f72997b
	Mar 07 22:39:37 old-k8s-version-497253 kubelet[660]: E0307 22:39:37.915177     660 pod_workers.go:191] Error syncing pod 14312bc8-5525-434e-b6f7-56306d708544 ("dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"
	Mar 07 22:39:42 old-k8s-version-497253 kubelet[660]: E0307 22:39:42.914820     660 pod_workers.go:191] Error syncing pod 1cd023e3-2966-4092-bcbd-4c5b6ca28aa3 ("metrics-server-9975d5f86-qmg9k_kube-system(1cd023e3-2966-4092-bcbd-4c5b6ca28aa3)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Mar 07 22:39:49 old-k8s-version-497253 kubelet[660]: I0307 22:39:49.913585     660 scope.go:95] [topologymanager] RemoveContainer - Container ID: 4436cb630d957568995c4d713a0f34f178beecbf0a05f492026f2c211f72997b
	Mar 07 22:39:49 old-k8s-version-497253 kubelet[660]: E0307 22:39:49.913915     660 pod_workers.go:191] Error syncing pod 14312bc8-5525-434e-b6f7-56306d708544 ("dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-85hs8_kubernetes-dashboard(14312bc8-5525-434e-b6f7-56306d708544)"
	
	
	==> kubernetes-dashboard [778e16c451c2e6a7fecd1d1b5d37778e92f50fd757af7d8812222cfa025fdc86] <==
	2024/03/07 22:34:31 Using namespace: kubernetes-dashboard
	2024/03/07 22:34:31 Using in-cluster config to connect to apiserver
	2024/03/07 22:34:31 Using secret token for csrf signing
	2024/03/07 22:34:31 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/03/07 22:34:32 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/03/07 22:34:32 Successful initial request to the apiserver, version: v1.20.0
	2024/03/07 22:34:32 Generating JWE encryption key
	2024/03/07 22:34:32 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/03/07 22:34:32 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/03/07 22:34:32 Initializing JWE encryption key from synchronized object
	2024/03/07 22:34:32 Creating in-cluster Sidecar client
	2024/03/07 22:34:32 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/07 22:34:32 Serving insecurely on HTTP port: 9090
	2024/03/07 22:35:02 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/07 22:35:32 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/07 22:36:02 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/07 22:36:32 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/07 22:37:02 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/07 22:37:32 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/07 22:38:02 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/07 22:38:32 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/07 22:39:02 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/07 22:39:32 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/07 22:34:31 Starting overwatch
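The dashboard lines above show a constant-interval retry rather than exponential backoff: while the dashboard-metrics-scraper Service is unreachable, the Sidecar client simply re-probes every 30 seconds. A minimal sketch of that loop, with a hypothetical healthCheck standing in for the real metric client:

	package main

	import (
		"errors"
		"log"
		"time"
	)

	// healthCheck is a hypothetical stand-in for the dashboard's metric client probe.
	func healthCheck() error {
		return errors.New("the server is currently unable to handle the request")
	}

	func main() {
		const interval = 30 * time.Second
		for {
			if err := healthCheck(); err != nil {
				log.Printf("Metric client health check failed: %v. Retrying in %s.", err, interval)
				time.Sleep(interval) // fixed interval, no backoff
				continue
			}
			log.Print("metric client healthy")
			return
		}
	}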
	
	
	==> storage-provisioner [426c41631cb95479682cbd7b19483427f710b2f7932b9e6d2bc0367710c2ca1d] <==
	I0307 22:34:51.048361       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0307 22:34:51.069541       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0307 22:34:51.069598       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0307 22:35:08.542939       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0307 22:35:08.543181       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b5039696-c59b-48fe-a02f-8301223acda1", APIVersion:"v1", ResourceVersion:"836", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-497253_43918d65-db2d-41ee-9e93-82b3f93a5562 became leader
	I0307 22:35:08.543920       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-497253_43918d65-db2d-41ee-9e93-82b3f93a5562!
	I0307 22:35:08.644136       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-497253_43918d65-db2d-41ee-9e93-82b3f93a5562!
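The storage-provisioner lines above are client-go leader election: the new pod contends for the kube-system/k8s.io-minikube-hostpath lock for about 17 seconds (waiting out the previous instance's lease after the restart), emits a LeaderElection event, and only then starts the provisioner controller. The event above uses the older Endpoints-based lock; a minimal sketch of the same handshake using the newer Lease lock, with hypothetical lease and identity names:

	package main

	import (
		"context"
		"log"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig") // hypothetical path
		if err != nil {
			log.Fatal(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		lock := &resourcelock.LeaseLock{
			LeaseMeta:  metav1.ObjectMeta{Name: "example-hostpath", Namespace: "kube-system"}, // hypothetical lease
			Client:     client.CoordinationV1(),
			LockConfig: resourcelock.ResourceLockConfig{Identity: "example-pod-1"}, // hypothetical identity
		}

		// RunOrDie blocks while contending for the lock; OnStartedLeading fires only
		// after the lease is acquired, matching "successfully acquired lease" above.
		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:            lock,
			LeaseDuration:   15 * time.Second,
			RenewDeadline:   10 * time.Second,
			RetryPeriod:     2 * time.Second,
			ReleaseOnCancel: true,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) { log.Print("became leader; starting controller") },
				OnStoppedLeading: func() { log.Print("lost leadership; stopping") },
			},
		})
	}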
	
	
	==> storage-provisioner [f98ff695720f5ebea06fa188196a8a6393c29dfd0211d4a0ec784c54cbb9f906] <==
	I0307 22:34:09.009846       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0307 22:34:39.012880       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-497253 -n old-k8s-version-497253
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-497253 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-9975d5f86-qmg9k
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-497253 describe pod metrics-server-9975d5f86-qmg9k
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-497253 describe pod metrics-server-9975d5f86-qmg9k: exit status 1 (111.109909ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-9975d5f86-qmg9k" not found

** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-497253 describe pod metrics-server-9975d5f86-qmg9k: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (375.58s)


Test pass (296/335)

Order   Passed test   Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 13.98
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.15
9 TestDownloadOnly/v1.20.0/DeleteAll 0.33
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.22
12 TestDownloadOnly/v1.28.4/json-events 9.48
13 TestDownloadOnly/v1.28.4/preload-exists 0
17 TestDownloadOnly/v1.28.4/LogsDuration 0.07
18 TestDownloadOnly/v1.28.4/DeleteAll 0.2
19 TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds 0.13
21 TestDownloadOnly/v1.29.0-rc.2/json-events 14.01
22 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
26 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.08
27 TestDownloadOnly/v1.29.0-rc.2/DeleteAll 0.21
28 TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds 0.14
30 TestBinaryMirror 0.55
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.08
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.09
36 TestAddons/Setup 117.15
38 TestAddons/parallel/Registry 15.5
40 TestAddons/parallel/InspektorGadget 11.83
41 TestAddons/parallel/MetricsServer 5.84
45 TestAddons/parallel/Headlamp 12.52
46 TestAddons/parallel/CloudSpanner 6.62
47 TestAddons/parallel/LocalPath 51.31
48 TestAddons/parallel/NvidiaDevicePlugin 5.56
49 TestAddons/parallel/Yakd 5
52 TestAddons/serial/GCPAuth/Namespaces 0.17
53 TestAddons/StoppedEnableDisable 12.32
54 TestCertOptions 38.32
55 TestCertExpiration 231.08
57 TestForceSystemdFlag 46.59
58 TestForceSystemdEnv 44.56
59 TestDockerEnvContainerd 47.25
64 TestErrorSpam/setup 30.94
65 TestErrorSpam/start 0.7
66 TestErrorSpam/status 0.94
67 TestErrorSpam/pause 1.72
68 TestErrorSpam/unpause 1.83
69 TestErrorSpam/stop 1.45
72 TestFunctional/serial/CopySyncFile 0.01
73 TestFunctional/serial/StartWithProxy 57.38
74 TestFunctional/serial/AuditLog 0
75 TestFunctional/serial/SoftStart 6.78
76 TestFunctional/serial/KubeContext 0.06
77 TestFunctional/serial/KubectlGetPods 0.11
80 TestFunctional/serial/CacheCmd/cache/add_remote 3.96
81 TestFunctional/serial/CacheCmd/cache/add_local 1.44
82 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
83 TestFunctional/serial/CacheCmd/cache/list 0.06
84 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.29
85 TestFunctional/serial/CacheCmd/cache/cache_reload 2.15
86 TestFunctional/serial/CacheCmd/cache/delete 0.13
87 TestFunctional/serial/MinikubeKubectlCmd 0.15
88 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.16
89 TestFunctional/serial/ExtraConfig 45.25
90 TestFunctional/serial/ComponentHealth 0.11
91 TestFunctional/serial/LogsCmd 1.59
92 TestFunctional/serial/LogsFileCmd 1.54
93 TestFunctional/serial/InvalidService 3.7
95 TestFunctional/parallel/ConfigCmd 0.54
96 TestFunctional/parallel/DashboardCmd 10.61
97 TestFunctional/parallel/DryRun 1
98 TestFunctional/parallel/InternationalLanguage 0.25
99 TestFunctional/parallel/StatusCmd 1.22
103 TestFunctional/parallel/ServiceCmdConnect 10.71
104 TestFunctional/parallel/AddonsCmd 0.16
105 TestFunctional/parallel/PersistentVolumeClaim 25.48
107 TestFunctional/parallel/SSHCmd 0.68
108 TestFunctional/parallel/CpCmd 2.4
110 TestFunctional/parallel/FileSync 0.29
111 TestFunctional/parallel/CertSync 2.25
115 TestFunctional/parallel/NodeLabels 0.11
117 TestFunctional/parallel/NonActiveRuntimeDisabled 0.71
119 TestFunctional/parallel/License 0.3
121 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.74
122 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
124 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 15.47
125 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.08
126 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
130 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
131 TestFunctional/parallel/ServiceCmd/DeployApp 7.28
132 TestFunctional/parallel/ProfileCmd/profile_not_create 0.46
133 TestFunctional/parallel/ProfileCmd/profile_list 0.37
134 TestFunctional/parallel/ProfileCmd/profile_json_output 0.41
135 TestFunctional/parallel/MountCmd/any-port 7.04
136 TestFunctional/parallel/ServiceCmd/List 0.49
137 TestFunctional/parallel/ServiceCmd/JSONOutput 0.61
138 TestFunctional/parallel/ServiceCmd/HTTPS 0.49
139 TestFunctional/parallel/ServiceCmd/Format 0.57
140 TestFunctional/parallel/MountCmd/specific-port 2.47
141 TestFunctional/parallel/ServiceCmd/URL 0.39
142 TestFunctional/parallel/MountCmd/VerifyCleanup 1.71
143 TestFunctional/parallel/Version/short 0.07
144 TestFunctional/parallel/Version/components 1.21
145 TestFunctional/parallel/ImageCommands/ImageListShort 0.27
146 TestFunctional/parallel/ImageCommands/ImageListTable 0.3
147 TestFunctional/parallel/ImageCommands/ImageListJson 0.29
148 TestFunctional/parallel/ImageCommands/ImageListYaml 0.36
149 TestFunctional/parallel/ImageCommands/ImageBuild 2.73
150 TestFunctional/parallel/ImageCommands/Setup 2.46
154 TestFunctional/parallel/UpdateContextCmd/no_changes 0.16
155 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.23
156 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.17
158 TestFunctional/parallel/ImageCommands/ImageRemove 0.48
160 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.62
161 TestFunctional/delete_addon-resizer_images 0.08
162 TestFunctional/delete_my-image_image 0.02
163 TestFunctional/delete_minikube_cached_images 0.01
167 TestMutliControlPlane/serial/StartCluster 136.34
168 TestMutliControlPlane/serial/DeployApp 18.8
169 TestMutliControlPlane/serial/PingHostFromPods 1.66
170 TestMutliControlPlane/serial/AddWorkerNode 25.46
171 TestMutliControlPlane/serial/NodeLabels 0.11
172 TestMutliControlPlane/serial/HAppyAfterClusterStart 0.75
173 TestMutliControlPlane/serial/CopyFile 20.41
174 TestMutliControlPlane/serial/StopSecondaryNode 12.89
175 TestMutliControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.56
176 TestMutliControlPlane/serial/RestartSecondaryNode 17.95
177 TestMutliControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.79
178 TestMutliControlPlane/serial/RestartClusterKeepsNodes 143.57
179 TestMutliControlPlane/serial/DeleteSecondaryNode 11.34
180 TestMutliControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.53
181 TestMutliControlPlane/serial/StopCluster 35.95
182 TestMutliControlPlane/serial/RestartCluster 69.77
183 TestMutliControlPlane/serial/DegradedAfterClusterRestart 0.55
184 TestMutliControlPlane/serial/AddSecondaryNode 45.45
185 TestMutliControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.75
189 TestJSONOutput/start/Command 62.51
190 TestJSONOutput/start/Audit 0
192 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/pause/Command 0.75
196 TestJSONOutput/pause/Audit 0
198 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/unpause/Command 0.64
202 TestJSONOutput/unpause/Audit 0
204 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
207 TestJSONOutput/stop/Command 5.81
208 TestJSONOutput/stop/Audit 0
210 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
211 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
212 TestErrorJSONOutput 0.23
214 TestKicCustomNetwork/create_custom_network 40.45
215 TestKicCustomNetwork/use_default_bridge_network 33.9
216 TestKicExistingNetwork 34.84
217 TestKicCustomSubnet 35.2
218 TestKicStaticIP 33.7
219 TestMainNoArgs 0.07
220 TestMinikubeProfile 69.27
223 TestMountStart/serial/StartWithMountFirst 6.12
224 TestMountStart/serial/VerifyMountFirst 0.26
225 TestMountStart/serial/StartWithMountSecond 6.06
226 TestMountStart/serial/VerifyMountSecond 0.27
227 TestMountStart/serial/DeleteFirst 1.64
228 TestMountStart/serial/VerifyMountPostDelete 0.26
229 TestMountStart/serial/Stop 1.23
230 TestMountStart/serial/RestartStopped 7.4
231 TestMountStart/serial/VerifyMountPostStop 0.26
234 TestMultiNode/serial/FreshStart2Nodes 79.32
235 TestMultiNode/serial/DeployApp2Nodes 6.01
236 TestMultiNode/serial/PingHostFrom2Pods 1.04
237 TestMultiNode/serial/AddNode 17.96
238 TestMultiNode/serial/MultiNodeLabels 0.09
239 TestMultiNode/serial/ProfileList 0.37
240 TestMultiNode/serial/CopyFile 10.25
241 TestMultiNode/serial/StopNode 2.26
242 TestMultiNode/serial/StartAfterStop 10.03
243 TestMultiNode/serial/RestartKeepsNodes 87.25
244 TestMultiNode/serial/DeleteNode 5.72
245 TestMultiNode/serial/StopMultiNode 23.93
246 TestMultiNode/serial/RestartMultiNode 55.29
247 TestMultiNode/serial/ValidateNameConflict 34
252 TestPreload 125.09
254 TestScheduledStopUnix 105.43
257 TestInsufficientStorage 10.62
258 TestRunningBinaryUpgrade 85.76
260 TestKubernetesUpgrade 385.42
261 TestMissingContainerUpgrade 169.81
263 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
264 TestNoKubernetes/serial/StartWithK8s 39.69
265 TestNoKubernetes/serial/StartWithStopK8s 16.56
266 TestNoKubernetes/serial/Start 5.14
267 TestNoKubernetes/serial/VerifyK8sNotRunning 0.27
268 TestNoKubernetes/serial/ProfileList 0.88
269 TestNoKubernetes/serial/Stop 1.2
270 TestNoKubernetes/serial/StartNoArgs 6.74
271 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.34
272 TestStoppedBinaryUpgrade/Setup 1.24
273 TestStoppedBinaryUpgrade/Upgrade 102.53
274 TestStoppedBinaryUpgrade/MinikubeLogs 1.14
283 TestPause/serial/Start 58.38
284 TestPause/serial/SecondStartNoReconfiguration 7.22
285 TestPause/serial/Pause 1.16
286 TestPause/serial/VerifyStatus 0.32
287 TestPause/serial/Unpause 0.8
288 TestPause/serial/PauseAgain 1.03
289 TestPause/serial/DeletePaused 3.19
290 TestPause/serial/VerifyDeletedResources 0.51
298 TestNetworkPlugins/group/false 5.58
303 TestStartStop/group/old-k8s-version/serial/FirstStart 145.12
304 TestStartStop/group/old-k8s-version/serial/DeployApp 9.61
305 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.55
307 TestStartStop/group/no-preload/serial/FirstStart 81.01
308 TestStartStop/group/old-k8s-version/serial/Stop 14.24
309 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.29
311 TestStartStop/group/no-preload/serial/DeployApp 8.36
312 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.3
313 TestStartStop/group/no-preload/serial/Stop 12.12
314 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.22
315 TestStartStop/group/no-preload/serial/SecondStart 266.28
316 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
317 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.11
318 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.27
319 TestStartStop/group/no-preload/serial/Pause 3.2
321 TestStartStop/group/embed-certs/serial/FirstStart 67.65
322 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
323 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.14
324 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.3
325 TestStartStop/group/old-k8s-version/serial/Pause 3.85
327 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 63.56
328 TestStartStop/group/embed-certs/serial/DeployApp 8.45
329 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.24
330 TestStartStop/group/embed-certs/serial/Stop 12.08
331 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.41
332 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.19
333 TestStartStop/group/embed-certs/serial/SecondStart 268.38
334 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.56
335 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.43
336 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
337 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 269.98
338 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
339 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.13
340 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.41
341 TestStartStop/group/embed-certs/serial/Pause 3.47
343 TestStartStop/group/newest-cni/serial/FirstStart 48.26
344 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
345 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.13
346 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.3
347 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.98
348 TestNetworkPlugins/group/auto/Start 66.6
349 TestStartStop/group/newest-cni/serial/DeployApp 0
350 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 2.25
351 TestStartStop/group/newest-cni/serial/Stop 1.41
352 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.39
353 TestStartStop/group/newest-cni/serial/SecondStart 16.99
354 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
355 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
356 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.5
357 TestStartStop/group/newest-cni/serial/Pause 4.28
358 TestNetworkPlugins/group/kindnet/Start 59.1
359 TestNetworkPlugins/group/auto/KubeletFlags 0.39
360 TestNetworkPlugins/group/auto/NetCatPod 10.41
361 TestNetworkPlugins/group/auto/DNS 0.31
362 TestNetworkPlugins/group/auto/Localhost 0.29
363 TestNetworkPlugins/group/auto/HairPin 0.28
364 TestNetworkPlugins/group/calico/Start 76.71
365 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
366 TestNetworkPlugins/group/kindnet/KubeletFlags 0.42
367 TestNetworkPlugins/group/kindnet/NetCatPod 9.33
368 TestNetworkPlugins/group/kindnet/DNS 0.2
369 TestNetworkPlugins/group/kindnet/Localhost 0.18
370 TestNetworkPlugins/group/kindnet/HairPin 0.18
371 TestNetworkPlugins/group/custom-flannel/Start 68.91
372 TestNetworkPlugins/group/calico/ControllerPod 6.01
373 TestNetworkPlugins/group/calico/KubeletFlags 0.41
374 TestNetworkPlugins/group/calico/NetCatPod 11.44
375 TestNetworkPlugins/group/calico/DNS 0.21
376 TestNetworkPlugins/group/calico/Localhost 0.19
377 TestNetworkPlugins/group/calico/HairPin 0.16
378 TestNetworkPlugins/group/enable-default-cni/Start 85.52
379 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.46
380 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.34
381 TestNetworkPlugins/group/custom-flannel/DNS 0.32
382 TestNetworkPlugins/group/custom-flannel/Localhost 0.21
383 TestNetworkPlugins/group/custom-flannel/HairPin 0.21
384 TestNetworkPlugins/group/flannel/Start 53.98
385 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.39
386 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.26
387 TestNetworkPlugins/group/flannel/ControllerPod 6.02
388 TestNetworkPlugins/group/enable-default-cni/DNS 0.19
389 TestNetworkPlugins/group/enable-default-cni/Localhost 0.17
390 TestNetworkPlugins/group/enable-default-cni/HairPin 0.18
391 TestNetworkPlugins/group/flannel/KubeletFlags 0.31
392 TestNetworkPlugins/group/flannel/NetCatPod 10.37
393 TestNetworkPlugins/group/flannel/DNS 0.26
394 TestNetworkPlugins/group/flannel/Localhost 0.19
395 TestNetworkPlugins/group/flannel/HairPin 0.26
396 TestNetworkPlugins/group/bridge/Start 54.48
397 TestNetworkPlugins/group/bridge/KubeletFlags 0.27
398 TestNetworkPlugins/group/bridge/NetCatPod 10.27
399 TestNetworkPlugins/group/bridge/DNS 0.18
400 TestNetworkPlugins/group/bridge/Localhost 0.17
401 TestNetworkPlugins/group/bridge/HairPin 0.17
TestDownloadOnly/v1.20.0/json-events (13.98s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-150781 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-150781 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (13.976750905s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (13.98s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.15s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-150781
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-150781: exit status 85 (147.564731ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-150781 | jenkins | v1.32.0 | 07 Mar 24 21:46 UTC |          |
	|         | -p download-only-150781        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/07 21:46:50
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0307 21:46:50.249322    7770 out.go:291] Setting OutFile to fd 1 ...
	I0307 21:46:50.249441    7770 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 21:46:50.249455    7770 out.go:304] Setting ErrFile to fd 2...
	I0307 21:46:50.249459    7770 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 21:46:50.249691    7770 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18320-2408/.minikube/bin
	W0307 21:46:50.249817    7770 root.go:314] Error reading config file at /home/jenkins/minikube-integration/18320-2408/.minikube/config/config.json: open /home/jenkins/minikube-integration/18320-2408/.minikube/config/config.json: no such file or directory
	I0307 21:46:50.250206    7770 out.go:298] Setting JSON to true
	I0307 21:46:50.250958    7770 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":1754,"bootTime":1709846257,"procs":149,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0307 21:46:50.251019    7770 start.go:139] virtualization:  
	I0307 21:46:50.254875    7770 out.go:97] [download-only-150781] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0307 21:46:50.257545    7770 out.go:169] MINIKUBE_LOCATION=18320
	W0307 21:46:50.255089    7770 preload.go:294] Failed to list preload files: open /home/jenkins/minikube-integration/18320-2408/.minikube/cache/preloaded-tarball: no such file or directory
	I0307 21:46:50.255131    7770 notify.go:220] Checking for updates...
	I0307 21:46:50.262013    7770 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 21:46:50.264264    7770 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18320-2408/kubeconfig
	I0307 21:46:50.266450    7770 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18320-2408/.minikube
	I0307 21:46:50.268674    7770 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0307 21:46:50.272793    7770 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0307 21:46:50.273049    7770 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 21:46:50.293306    7770 docker.go:122] docker version: linux-25.0.4:Docker Engine - Community
	I0307 21:46:50.293405    7770 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0307 21:46:50.659569    7770 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:56 SystemTime:2024-03-07 21:46:50.650959839 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0307 21:46:50.659672    7770 docker.go:295] overlay module found
	I0307 21:46:50.662352    7770 out.go:97] Using the docker driver based on user configuration
	I0307 21:46:50.662379    7770 start.go:297] selected driver: docker
	I0307 21:46:50.662387    7770 start.go:901] validating driver "docker" against <nil>
	I0307 21:46:50.662497    7770 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0307 21:46:50.721762    7770 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:56 SystemTime:2024-03-07 21:46:50.713737649 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0307 21:46:50.721931    7770 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0307 21:46:50.722217    7770 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0307 21:46:50.722409    7770 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0307 21:46:50.724978    7770 out.go:169] Using Docker driver with root privileges
	I0307 21:46:50.727912    7770 cni.go:84] Creating CNI manager for ""
	I0307 21:46:50.727937    7770 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0307 21:46:50.727947    7770 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0307 21:46:50.728029    7770 start.go:340] cluster config:
	{Name:download-only-150781 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-150781 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 21:46:50.730727    7770 out.go:97] Starting "download-only-150781" primary control-plane node in "download-only-150781" cluster
	I0307 21:46:50.730747    7770 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0307 21:46:50.733079    7770 out.go:97] Pulling base image v0.0.42-1708944392-18244 ...
	I0307 21:46:50.733102    7770 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0307 21:46:50.733298    7770 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local docker daemon
	I0307 21:46:50.747304    7770 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 to local cache
	I0307 21:46:50.747492    7770 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local cache directory
	I0307 21:46:50.747594    7770 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 to local cache
	I0307 21:46:50.810223    7770 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0307 21:46:50.810300    7770 cache.go:56] Caching tarball of preloaded images
	I0307 21:46:50.810472    7770 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0307 21:46:50.813619    7770 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0307 21:46:50.813644    7770 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I0307 21:46:50.937654    7770 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:7e3d48ccb9f143791669d02e14ce1643 -> /home/jenkins/minikube-integration/18320-2408/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0307 21:46:56.564244    7770 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 as a tarball
	I0307 21:46:57.296487    7770 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I0307 21:46:57.296620    7770 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/18320-2408/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I0307 21:46:58.369340    7770 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
	I0307 21:46:58.369729    7770 profile.go:142] Saving config to /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/download-only-150781/config.json ...
	I0307 21:46:58.369762    7770 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/download-only-150781/config.json: {Name:mkb94b8b86c86a749cafed027167c876c88eb2f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 21:46:58.369946    7770 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0307 21:46:58.370142    7770 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/18320-2408/.minikube/cache/linux/arm64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-150781 host does not exist
	  To start a cluster, run: "minikube start -p download-only-150781"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.15s)
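
Note on the exit status: the profile above was created with --download-only, so no host was ever provisioned, and "minikube logs" exits with status 85 after printing the "host does not exist" hint; the test passes because it expects exactly that. A minimal sketch, assuming the binary path and profile name shown above (this is not the suite's own assertion helper), of checking for that exit code from Go:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		// Run "minikube logs" against a download-only profile and expect
		// exit status 85, as seen in the test output above.
		cmd := exec.Command("out/minikube-linux-arm64", "logs", "-p", "download-only-150781")
		_, err := cmd.CombinedOutput()
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) && exitErr.ExitCode() == 85 {
			fmt.Println("got expected exit status 85 (profile host was never created)")
			return
		}
		fmt.Printf("unexpected result: %v\n", err)
	}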

TestDownloadOnly/v1.20.0/DeleteAll (0.33s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.33s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.22s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-150781
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.22s)

TestDownloadOnly/v1.28.4/json-events (9.48s)

=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-526545 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-526545 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (9.479460585s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (9.48s)

TestDownloadOnly/v1.28.4/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)
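
The preload-exists subtest reduces to a filesystem check: the preceding json-events run must have left the preload tarball in the cache. A rough hand-rolled sketch of such a check, hard-coding the cache path that appears in the download log below (illustrative only, not the suite's actual code):

	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		// Cache path copied from the v1.28.4 download log in this report.
		p := "/home/jenkins/minikube-integration/18320-2408/.minikube/cache/preloaded-tarball/" +
			"preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4"
		fi, err := os.Stat(p)
		if err != nil || fi.Size() == 0 {
			fmt.Println("preload tarball missing or empty:", err)
			return
		}
		fmt.Printf("preload tarball present (%d bytes)\n", fi.Size())
	}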

TestDownloadOnly/v1.28.4/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-526545
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-526545: exit status 85 (71.23297ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-150781 | jenkins | v1.32.0 | 07 Mar 24 21:46 UTC |                     |
	|         | -p download-only-150781        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.32.0 | 07 Mar 24 21:47 UTC | 07 Mar 24 21:47 UTC |
	| delete  | -p download-only-150781        | download-only-150781 | jenkins | v1.32.0 | 07 Mar 24 21:47 UTC | 07 Mar 24 21:47 UTC |
	| start   | -o=json --download-only        | download-only-526545 | jenkins | v1.32.0 | 07 Mar 24 21:47 UTC |                     |
	|         | -p download-only-526545        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/07 21:47:04
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0307 21:47:04.927882    7931 out.go:291] Setting OutFile to fd 1 ...
	I0307 21:47:04.927994    7931 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 21:47:04.928003    7931 out.go:304] Setting ErrFile to fd 2...
	I0307 21:47:04.928008    7931 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 21:47:04.928237    7931 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18320-2408/.minikube/bin
	I0307 21:47:04.928671    7931 out.go:298] Setting JSON to true
	I0307 21:47:04.929368    7931 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":1768,"bootTime":1709846257,"procs":145,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0307 21:47:04.929430    7931 start.go:139] virtualization:  
	I0307 21:47:04.943727    7931 out.go:97] [download-only-526545] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0307 21:47:04.955334    7931 out.go:169] MINIKUBE_LOCATION=18320
	I0307 21:47:04.943961    7931 notify.go:220] Checking for updates...
	I0307 21:47:04.978491    7931 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 21:47:04.992069    7931 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18320-2408/kubeconfig
	I0307 21:47:05.008908    7931 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18320-2408/.minikube
	I0307 21:47:05.014028    7931 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0307 21:47:05.042679    7931 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0307 21:47:05.042974    7931 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 21:47:05.063268    7931 docker.go:122] docker version: linux-25.0.4:Docker Engine - Community
	I0307 21:47:05.063381    7931 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0307 21:47:05.128671    7931 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:49 SystemTime:2024-03-07 21:47:05.119007712 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0307 21:47:05.128773    7931 docker.go:295] overlay module found
	I0307 21:47:05.133203    7931 out.go:97] Using the docker driver based on user configuration
	I0307 21:47:05.133231    7931 start.go:297] selected driver: docker
	I0307 21:47:05.133239    7931 start.go:901] validating driver "docker" against <nil>
	I0307 21:47:05.133372    7931 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0307 21:47:05.188999    7931 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:49 SystemTime:2024-03-07 21:47:05.179946845 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0307 21:47:05.189175    7931 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0307 21:47:05.189461    7931 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0307 21:47:05.189619    7931 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0307 21:47:05.196423    7931 out.go:169] Using Docker driver with root privileges
	I0307 21:47:05.200601    7931 cni.go:84] Creating CNI manager for ""
	I0307 21:47:05.200624    7931 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0307 21:47:05.200634    7931 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0307 21:47:05.200725    7931 start.go:340] cluster config:
	{Name:download-only-526545 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-526545 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 21:47:05.216388    7931 out.go:97] Starting "download-only-526545" primary control-plane node in "download-only-526545" cluster
	I0307 21:47:05.216436    7931 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0307 21:47:05.237780    7931 out.go:97] Pulling base image v0.0.42-1708944392-18244 ...
	I0307 21:47:05.237823    7931 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0307 21:47:05.237909    7931 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local docker daemon
	I0307 21:47:05.254217    7931 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 to local cache
	I0307 21:47:05.254349    7931 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local cache directory
	I0307 21:47:05.254373    7931 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local cache directory, skipping pull
	I0307 21:47:05.254378    7931 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 exists in cache, skipping pull
	I0307 21:47:05.254386    7931 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 as a tarball
	I0307 21:47:05.301333    7931 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4
	I0307 21:47:05.301359    7931 cache.go:56] Caching tarball of preloaded images
	I0307 21:47:05.301508    7931 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0307 21:47:05.303956    7931 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I0307 21:47:05.303993    7931 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4 ...
	I0307 21:47:05.414064    7931 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4?checksum=md5:cc2d75db20c4d651f0460755d6df7b03 -> /home/jenkins/minikube-integration/18320-2408/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-526545 host does not exist
	  To start a cluster, run: "minikube start -p download-only-526545"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.07s)

TestDownloadOnly/v1.28.4/DeleteAll (0.2s)

=== RUN   TestDownloadOnly/v1.28.4/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.4/DeleteAll (0.20s)

TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-526545
--- PASS: TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnly/v1.29.0-rc.2/json-events (14.01s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-336944 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-336944 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (14.005235554s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (14.01s)

TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-336944
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-336944: exit status 85 (81.561426ms)

-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-150781 | jenkins | v1.32.0 | 07 Mar 24 21:46 UTC |                     |
	|         | -p download-only-150781           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0      |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 07 Mar 24 21:47 UTC | 07 Mar 24 21:47 UTC |
	| delete  | -p download-only-150781           | download-only-150781 | jenkins | v1.32.0 | 07 Mar 24 21:47 UTC | 07 Mar 24 21:47 UTC |
	| start   | -o=json --download-only           | download-only-526545 | jenkins | v1.32.0 | 07 Mar 24 21:47 UTC |                     |
	|         | -p download-only-526545           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 07 Mar 24 21:47 UTC | 07 Mar 24 21:47 UTC |
	| delete  | -p download-only-526545           | download-only-526545 | jenkins | v1.32.0 | 07 Mar 24 21:47 UTC | 07 Mar 24 21:47 UTC |
	| start   | -o=json --download-only           | download-only-336944 | jenkins | v1.32.0 | 07 Mar 24 21:47 UTC |                     |
	|         | -p download-only-336944           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/07 21:47:14
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0307 21:47:14.806122    8093 out.go:291] Setting OutFile to fd 1 ...
	I0307 21:47:14.806306    8093 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 21:47:14.806315    8093 out.go:304] Setting ErrFile to fd 2...
	I0307 21:47:14.806320    8093 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 21:47:14.806610    8093 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18320-2408/.minikube/bin
	I0307 21:47:14.807058    8093 out.go:298] Setting JSON to true
	I0307 21:47:14.807954    8093 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":1778,"bootTime":1709846257,"procs":145,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0307 21:47:14.808028    8093 start.go:139] virtualization:  
	I0307 21:47:14.810505    8093 out.go:97] [download-only-336944] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0307 21:47:14.812854    8093 out.go:169] MINIKUBE_LOCATION=18320
	I0307 21:47:14.810793    8093 notify.go:220] Checking for updates...
	I0307 21:47:14.816778    8093 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 21:47:14.818881    8093 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18320-2408/kubeconfig
	I0307 21:47:14.820804    8093 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18320-2408/.minikube
	I0307 21:47:14.822951    8093 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0307 21:47:14.827282    8093 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0307 21:47:14.827558    8093 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 21:47:14.847571    8093 docker.go:122] docker version: linux-25.0.4:Docker Engine - Community
	I0307 21:47:14.847670    8093 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0307 21:47:14.911675    8093 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:49 SystemTime:2024-03-07 21:47:14.902591611 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0307 21:47:14.911786    8093 docker.go:295] overlay module found
	I0307 21:47:14.913865    8093 out.go:97] Using the docker driver based on user configuration
	I0307 21:47:14.913899    8093 start.go:297] selected driver: docker
	I0307 21:47:14.913906    8093 start.go:901] validating driver "docker" against <nil>
	I0307 21:47:14.914024    8093 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0307 21:47:14.967230    8093 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:49 SystemTime:2024-03-07 21:47:14.958369971 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0307 21:47:14.967386    8093 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0307 21:47:14.967656    8093 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0307 21:47:14.967834    8093 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0307 21:47:14.970598    8093 out.go:169] Using Docker driver with root privileges
	I0307 21:47:14.973397    8093 cni.go:84] Creating CNI manager for ""
	I0307 21:47:14.973420    8093 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0307 21:47:14.973430    8093 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0307 21:47:14.973507    8093 start.go:340] cluster config:
	{Name:download-only-336944 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:download-only-336944 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 21:47:14.976584    8093 out.go:97] Starting "download-only-336944" primary control-plane node in "download-only-336944" cluster
	I0307 21:47:14.976604    8093 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0307 21:47:14.978623    8093 out.go:97] Pulling base image v0.0.42-1708944392-18244 ...
	I0307 21:47:14.978645    8093 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime containerd
	I0307 21:47:14.978803    8093 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local docker daemon
	I0307 21:47:14.993630    8093 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 to local cache
	I0307 21:47:14.993775    8093 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local cache directory
	I0307 21:47:14.993797    8093 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local cache directory, skipping pull
	I0307 21:47:14.993810    8093 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 exists in cache, skipping pull
	I0307 21:47:14.993818    8093 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 as a tarball
	I0307 21:47:15.043250    8093 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-arm64.tar.lz4
	I0307 21:47:15.043298    8093 cache.go:56] Caching tarball of preloaded images
	I0307 21:47:15.043489    8093 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime containerd
	I0307 21:47:15.045930    8093 out.go:97] Downloading Kubernetes v1.29.0-rc.2 preload ...
	I0307 21:47:15.045960    8093 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-arm64.tar.lz4 ...
	I0307 21:47:15.154948    8093 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-arm64.tar.lz4?checksum=md5:adc883bf092a67b4673b5b5787f99b2f -> /home/jenkins/minikube-integration/18320-2408/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-arm64.tar.lz4
	I0307 21:47:20.661932    8093 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-arm64.tar.lz4 ...
	I0307 21:47:20.662057    8093 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/18320-2408/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-arm64.tar.lz4 ...
	I0307 21:47:21.519623    8093 cache.go:59] Finished verifying existence of preloaded tar for v1.29.0-rc.2 on containerd
	I0307 21:47:21.520003    8093 profile.go:142] Saving config to /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/download-only-336944/config.json ...
	I0307 21:47:21.520040    8093 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/download-only-336944/config.json: {Name:mka267993a483ee5bdbba79fa46083d261fa3a02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0307 21:47:21.520242    8093 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime containerd
	I0307 21:47:21.520430    8093 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.0-rc.2/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.0-rc.2/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/18320-2408/.minikube/cache/linux/arm64/v1.29.0-rc.2/kubectl
	
	
	* The control-plane node download-only-336944 host does not exist
	  To start a cluster, run: "minikube start -p download-only-336944"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.08s)
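
The download URLs in these logs carry an "md5:" checksum hint, and the log records separate "getting checksum" and "verifying checksum" passes over the saved tarball. A minimal sketch of that verification step, with the digest and path copied from the v1.29.0-rc.2 log lines above (illustrative, not minikube's own code):

	package main

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"os"
	)

	func main() {
		const want = "adc883bf092a67b4673b5b5787f99b2f" // from the download URL above
		f, err := os.Open("/home/jenkins/minikube-integration/18320-2408/.minikube/cache/preloaded-tarball/" +
			"preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-arm64.tar.lz4")
		if err != nil {
			fmt.Println("open:", err)
			return
		}
		defer f.Close()
		h := md5.New()
		if _, err := io.Copy(h, f); err != nil {
			fmt.Println("read:", err)
			return
		}
		if got := hex.EncodeToString(h.Sum(nil)); got != want {
			fmt.Println("checksum mismatch:", got)
			return
		}
		fmt.Println("checksum OK")
	}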

TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.21s)

TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-336944
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.14s)

TestBinaryMirror (0.55s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-880422 --alsologtostderr --binary-mirror http://127.0.0.1:39807 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-880422" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-880422
--- PASS: TestBinaryMirror (0.55s)
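
TestBinaryMirror starts minikube with --binary-mirror http://127.0.0.1:39807, so Kubernetes binaries are fetched from a local HTTP endpoint rather than the public release host. A minimal sketch of such a mirror, assuming a ./mirror directory laid out like the release tree (the directory name and layout are illustrative, not the harness's own server):

	package main

	import (
		"log"
		"net/http"
	)

	func main() {
		// Serves e.g. ./mirror/v1.28.4/bin/linux/arm64/kubectl at
		// http://127.0.0.1:39807/v1.28.4/bin/linux/arm64/kubectl.
		http.Handle("/", http.FileServer(http.Dir("./mirror")))
		log.Fatal(http.ListenAndServe("127.0.0.1:39807", nil))
	}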

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-963512
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-963512: exit status 85 (84.318518ms)

-- stdout --
	* Profile "addons-963512" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-963512"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.09s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-963512
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-963512: exit status 85 (87.475903ms)

-- stdout --
	* Profile "addons-963512" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-963512"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.09s)

TestAddons/Setup (117.15s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-arm64 start -p addons-963512 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns
addons_test.go:109: (dbg) Done: out/minikube-linux-arm64 start -p addons-963512 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns: (1m57.151866765s)
--- PASS: TestAddons/Setup (117.15s)
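
The parallel addon tests that follow share one pattern: wait for pods matching a label selector to become healthy, exercise the addon, then disable it. The same wait can be expressed as a plain kubectl invocation; in this sketch the context name comes from this report, while the selector, namespace, and timeout are examples rather than the harness's own values:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Block until pods matching the selector report Ready, or time out.
		cmd := exec.Command("kubectl", "--context", "addons-963512",
			"wait", "--for=condition=ready", "pod",
			"--selector=k8s-app=metrics-server",
			"--namespace=kube-system", "--timeout=6m0s")
		out, err := cmd.CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			fmt.Println("wait failed:", err)
		}
	}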

TestAddons/parallel/Registry (15.5s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 42.885107ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-g5s9h" [5acdf3e9-cdb0-4e0e-82bc-ce557a05d53f] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.004548706s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-pn5c9" [0bc81ee0-ca94-4da6-9aa2-210e4709466d] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004414026s
addons_test.go:340: (dbg) Run:  kubectl --context addons-963512 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-963512 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-963512 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.360060542s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-arm64 -p addons-963512 ip
2024/03/07 21:49:42 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:388: (dbg) Run:  out/minikube-linux-arm64 -p addons-963512 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.50s)
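
The registry check above probes the addon twice: in-cluster via a busybox pod running wget --spider against registry.kube-system.svc.cluster.local, and from the host via the node IP (the DEBUG GET line). A minimal sketch of the host-side probe; the address is taken from this run's log and will differ per environment:

	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{Timeout: 5 * time.Second}
		resp, err := client.Get("http://192.168.49.2:5000")
		if err != nil {
			fmt.Println("registry not reachable:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println("registry responded:", resp.Status)
	}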

TestAddons/parallel/InspektorGadget (11.83s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-gcmdq" [d9cadff4-e84f-4ea3-a8fe-87c5b793ae91] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.005133776s
addons_test.go:841: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-963512
addons_test.go:841: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-963512: (5.823059171s)
--- PASS: TestAddons/parallel/InspektorGadget (11.83s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.84s)
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 5.66921ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-69cf46c98-qzrq2" [026e1c6f-54a2-4f1b-83ab-9b6f24976fe3] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.014482387s
addons_test.go:415: (dbg) Run:  kubectl --context addons-963512 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-linux-arm64 -p addons-963512 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.84s)
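
Note: once metrics-server is healthy, resource metrics can be queried directly; a minimal sketch using the profile from this run (the node-level query is a standard companion check, not part of this test):

kubectl --context addons-963512 top pods -n kube-system   # same query as addons_test.go:415
kubectl --context addons-963512 top node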

                                                
                                    
TestAddons/parallel/Headlamp (12.52s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-963512 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-963512 --alsologtostderr -v=1: (1.519185113s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5485c556b-rs5fz" [5c789064-e146-4b0c-951e-c1074bf915d8] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5485c556b-rs5fz" [5c789064-e146-4b0c-951e-c1074bf915d8] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.003447715s
--- PASS: TestAddons/parallel/Headlamp (12.52s)

                                                
                                    
TestAddons/parallel/CloudSpanner (6.62s)
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6548d5df46-c4d87" [338d31d1-c07f-4bcb-9315-37c7cc8f59ba] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.004619887s
addons_test.go:860: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-963512
--- PASS: TestAddons/parallel/CloudSpanner (6.62s)

                                                
                                    
TestAddons/parallel/LocalPath (51.31s)
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-963512 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-963512 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-963512 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-963512 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-963512 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-963512 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [a7b25d4e-629c-4fb4-815c-24277ce62fd7] Pending
helpers_test.go:344: "test-local-path" [a7b25d4e-629c-4fb4-815c-24277ce62fd7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [a7b25d4e-629c-4fb4-815c-24277ce62fd7] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [a7b25d4e-629c-4fb4-815c-24277ce62fd7] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.004443376s
addons_test.go:891: (dbg) Run:  kubectl --context addons-963512 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-arm64 -p addons-963512 ssh "cat /opt/local-path-provisioner/pvc-744a55dd-baca-49c6-99ea-51a2d0a7df42_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-963512 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-963512 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-arm64 -p addons-963512 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-linux-arm64 -p addons-963512 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.189373352s)
--- PASS: TestAddons/parallel/LocalPath (51.31s)
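
Note: the LocalPath flow above reduces to: create a PVC against the local-path provisioner, mount it from a pod, then read the file back under /opt/local-path-provisioner on the node. A rough stand-in for testdata/storage-provisioner-rancher/pvc.yaml (the storageClassName and size here are assumptions, not the test's exact manifest):

kubectl --context addons-963512 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  storageClassName: local-path   # assumed name of the class installed by storage-provisioner-rancher
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 64Mi
EOF
# Poll the claim's phase, as helpers_test.go:394 does above
kubectl --context addons-963512 get pvc test-pvc -o jsonpath='{.status.phase}'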

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.56s)
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-skr6t" [bd96c754-234b-4da9-a225-d2510af33519] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004723079s
addons_test.go:955: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-963512
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.56s)

                                                
                                    
TestAddons/parallel/Yakd (5s)
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-dw8s7" [2c078f76-1b81-4c46-8f18-a61e2d226852] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003662759s
--- PASS: TestAddons/parallel/Yakd (5.00s)
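
Note: the helper's readiness polling here is equivalent to a single kubectl wait; a by-hand sketch, assuming the addons-963512 profile from this run (the timeout value is arbitrary):

# Block until the yakd dashboard pod reports Ready, as the test's poll does
kubectl --context addons-963512 wait --for=condition=Ready pod \
  -l app.kubernetes.io/name=yakd-dashboard -n yakd-dashboard --timeout=120s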

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.17s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-963512 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-963512 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.17s)
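
Note: this check verifies that the gcp-auth webhook copies its secret into namespaces created after the addon was enabled; the same two commands by hand:

kubectl --context addons-963512 create ns new-namespace
kubectl --context addons-963512 get secret gcp-auth -n new-namespace   # present without any manual copy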

                                                
                                    
TestAddons/StoppedEnableDisable (12.32s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-963512
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-963512: (12.03000794s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-963512
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-963512
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-963512
--- PASS: TestAddons/StoppedEnableDisable (12.32s)
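
Note: the point of this test is that addon toggles still work against a stopped cluster, since they only edit the profile's stored config. Sketch with a stock minikube binary in place of out/minikube-linux-arm64:

minikube stop -p addons-963512
minikube addons enable dashboard -p addons-963512    # recorded now, applied on next start
minikube addons disable dashboard -p addons-963512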

                                                
                                    
TestCertOptions (38.32s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-781550 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-781550 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (35.723136749s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-781550 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-781550 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-781550 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-781550" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-781550
E0307 22:30:45.512960    7764 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/functional-894723/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-781550: (1.982473798s)
--- PASS: TestCertOptions (38.32s)
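
Note: to see that the extra --apiserver-ips/--apiserver-names made it into the serving certificate, inspect it the way the test does at cert_options_test.go:60; the grep is an added convenience, not part of the test:

minikube -p cert-options-781550 ssh \
  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
  | grep -A1 "Subject Alternative Name"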

                                                
                                    
TestCertExpiration (231.08s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-193013 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-193013 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (42.572714498s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-193013 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-193013 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (6.120419676s)
helpers_test.go:175: Cleaning up "cert-expiration-193013" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-193013
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-193013: (2.387970073s)
--- PASS: TestCertExpiration (231.08s)
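
Note: the expiration test is two starts of one profile: the first signs certificates with a 3-minute lifetime, and the second, run after they lapse, re-signs them with 8760h (one year). Sketch with a stock minikube binary:

minikube start -p cert-expiration-193013 --memory=2048 --cert-expiration=3m \
  --driver=docker --container-runtime=containerd
# wait out the 3m lifetime, then restart to rotate the certificates
minikube start -p cert-expiration-193013 --memory=2048 --cert-expiration=8760h \
  --driver=docker --container-runtime=containerd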

                                                
                                    
TestForceSystemdFlag (46.59s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-026071 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-026071 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (43.65110579s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-026071 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-026071" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-026071
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-026071: (2.58141849s)
--- PASS: TestForceSystemdFlag (46.59s)
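
Note: --force-systemd should surface in containerd's config as the systemd cgroup driver. The test cats the whole file; the relevant key (SystemdCgroup, under the runc runtime options) can be grepped directly:

minikube -p force-systemd-flag-026071 ssh "cat /etc/containerd/config.toml" \
  | grep SystemdCgroup   # expect: SystemdCgroup = true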

                                                
                                    
TestForceSystemdEnv (44.56s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-344179 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
E0307 22:29:28.003975    7764 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/addons-963512/client.crt: no such file or directory
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-344179 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (41.875751265s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-344179 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-344179" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-344179
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-344179: (2.246992181s)
--- PASS: TestForceSystemdEnv (44.56s)

                                                
                                    
TestDockerEnvContainerd (47.25s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-978342 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-978342 --driver=docker  --container-runtime=containerd: (31.204149847s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-978342"
docker_test.go:189: (dbg) Done: /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-978342": (1.42082119s)
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-ycVIJsXTZXnf/agent.24965" SSH_AGENT_PID="24966" DOCKER_HOST=ssh://docker@127.0.0.1:32777 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-ycVIJsXTZXnf/agent.24965" SSH_AGENT_PID="24966" DOCKER_HOST=ssh://docker@127.0.0.1:32777 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-ycVIJsXTZXnf/agent.24965" SSH_AGENT_PID="24966" DOCKER_HOST=ssh://docker@127.0.0.1:32777 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.353941538s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-ycVIJsXTZXnf/agent.24965" SSH_AGENT_PID="24966" DOCKER_HOST=ssh://docker@127.0.0.1:32777 docker image ls"
helpers_test.go:175: Cleaning up "dockerenv-978342" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-978342
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-978342: (1.950273392s)
--- PASS: TestDockerEnvContainerd (47.25s)
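
Note: the SSH_AUTH_SOCK/DOCKER_HOST exports above are exactly what docker-env --ssh-host --ssh-add prints; interactive use is normally a single eval (stock binary assumed, profile name from this run):

eval "$(minikube -p dockerenv-978342 docker-env --ssh-host --ssh-add)"
docker version    # now talks to the Docker daemon inside the minikube node over SSH
docker image ls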

                                                
                                    
TestErrorSpam/setup (30.94s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-518025 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-518025 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-518025 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-518025 --driver=docker  --container-runtime=containerd: (30.940923697s)
--- PASS: TestErrorSpam/setup (30.94s)

                                                
                                    
TestErrorSpam/start (0.7s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-518025 --log_dir /tmp/nospam-518025 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-518025 --log_dir /tmp/nospam-518025 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-518025 --log_dir /tmp/nospam-518025 start --dry-run
--- PASS: TestErrorSpam/start (0.70s)

                                                
                                    
TestErrorSpam/status (0.94s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-518025 --log_dir /tmp/nospam-518025 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-518025 --log_dir /tmp/nospam-518025 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-518025 --log_dir /tmp/nospam-518025 status
--- PASS: TestErrorSpam/status (0.94s)

                                                
                                    
TestErrorSpam/pause (1.72s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-518025 --log_dir /tmp/nospam-518025 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-518025 --log_dir /tmp/nospam-518025 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-518025 --log_dir /tmp/nospam-518025 pause
--- PASS: TestErrorSpam/pause (1.72s)

                                                
                                    
TestErrorSpam/unpause (1.83s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-518025 --log_dir /tmp/nospam-518025 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-518025 --log_dir /tmp/nospam-518025 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-518025 --log_dir /tmp/nospam-518025 unpause
--- PASS: TestErrorSpam/unpause (1.83s)

                                                
                                    
TestErrorSpam/stop (1.45s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-518025 --log_dir /tmp/nospam-518025 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-518025 --log_dir /tmp/nospam-518025 stop: (1.237572066s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-518025 --log_dir /tmp/nospam-518025 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-518025 --log_dir /tmp/nospam-518025 stop
--- PASS: TestErrorSpam/stop (1.45s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0.01s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/18320-2408/.minikube/files/etc/test/nested/copy/7764/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.01s)

                                                
                                    
TestFunctional/serial/StartWithProxy (57.38s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-arm64 start -p functional-894723 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
E0307 21:54:28.005807    7764 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/addons-963512/client.crt: no such file or directory
E0307 21:54:28.011421    7764 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/addons-963512/client.crt: no such file or directory
E0307 21:54:28.021668    7764 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/addons-963512/client.crt: no such file or directory
E0307 21:54:28.041958    7764 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/addons-963512/client.crt: no such file or directory
E0307 21:54:28.082262    7764 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/addons-963512/client.crt: no such file or directory
E0307 21:54:28.162540    7764 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/addons-963512/client.crt: no such file or directory
E0307 21:54:28.322862    7764 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/addons-963512/client.crt: no such file or directory
E0307 21:54:28.643448    7764 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/addons-963512/client.crt: no such file or directory
E0307 21:54:29.284385    7764 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/addons-963512/client.crt: no such file or directory
E0307 21:54:30.564780    7764 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/addons-963512/client.crt: no such file or directory
E0307 21:54:33.125199    7764 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/addons-963512/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-linux-arm64 start -p functional-894723 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (57.38130635s)
--- PASS: TestFunctional/serial/StartWithProxy (57.38s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (6.78s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-arm64 start -p functional-894723 --alsologtostderr -v=8
E0307 21:54:38.246263    7764 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/addons-963512/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-arm64 start -p functional-894723 --alsologtostderr -v=8: (6.773618922s)
functional_test.go:659: soft start took 6.777992723s for "functional-894723" cluster.
--- PASS: TestFunctional/serial/SoftStart (6.78s)

                                                
                                    
TestFunctional/serial/KubeContext (0.06s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.11s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-894723 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.11s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.96s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-894723 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-894723 cache add registry.k8s.io/pause:3.1: (1.420486297s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-894723 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-894723 cache add registry.k8s.io/pause:3.3: (1.35453511s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-894723 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-894723 cache add registry.k8s.io/pause:latest: (1.183328064s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.96s)
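
Note: minikube cache add pulls an image on the host and side-loads it into the node's container runtime; by hand, with verification inside the node (stock binary assumed):

minikube -p functional-894723 cache add registry.k8s.io/pause:3.1
minikube -p functional-894723 ssh sudo crictl images | grep pause   # confirm it landed in the node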

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.44s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-894723 /tmp/TestFunctionalserialCacheCmdcacheadd_local1495226854/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-arm64 -p functional-894723 cache add minikube-local-cache-test:functional-894723
functional_test.go:1090: (dbg) Run:  out/minikube-linux-arm64 -p functional-894723 cache delete minikube-local-cache-test:functional-894723
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-894723
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.44s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-arm64 -p functional-894723 ssh sudo crictl images
E0307 21:54:48.486533    7764 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/addons-963512/client.crt: no such file or directory
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (2.15s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-arm64 -p functional-894723 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-arm64 -p functional-894723 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-894723 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (310.139483ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-arm64 -p functional-894723 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-linux-arm64 -p functional-894723 cache reload: (1.254296755s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-arm64 -p functional-894723 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.15s)
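
Note: the reload sequence above, spelled out: remove the image inside the node, prove it is gone (crictl inspecti exits 1, as in the non-zero exit captured above), then let cache reload restore everything held in the local cache:

minikube -p functional-894723 ssh sudo crictl rmi registry.k8s.io/pause:latest
minikube -p functional-894723 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exits 1: image gone
minikube -p functional-894723 cache reload
minikube -p functional-894723 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again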

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.13s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.15s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-arm64 -p functional-894723 kubectl -- --context functional-894723 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.15s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.16s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-894723 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.16s)

                                                
                                    
TestFunctional/serial/ExtraConfig (45.25s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-arm64 start -p functional-894723 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0307 21:55:08.966797    7764 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/addons-963512/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-arm64 start -p functional-894723 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (45.248505695s)
functional_test.go:757: restart took 45.248614051s for "functional-894723" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (45.25s)
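
Note: --extra-config forwards arbitrary flags to individual Kubernetes components as component.key=value; the restart above hands the API server an extra admission plugin. Equivalent invocation with a stock binary:

minikube start -p functional-894723 \
  --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all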

                                                
                                    
TestFunctional/serial/ComponentHealth (0.11s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-894723 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.11s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.59s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-arm64 -p functional-894723 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-arm64 -p functional-894723 logs: (1.592744566s)
--- PASS: TestFunctional/serial/LogsCmd (1.59s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.54s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-arm64 -p functional-894723 logs --file /tmp/TestFunctionalserialLogsFileCmd3513043362/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-arm64 -p functional-894723 logs --file /tmp/TestFunctionalserialLogsFileCmd3513043362/001/logs.txt: (1.543285898s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.54s)

                                                
                                    
TestFunctional/serial/InvalidService (3.7s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-894723 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-894723
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-894723: exit status 115 (362.523639ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31879 |
	|-----------|-------------|-------------|---------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-894723 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.70s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.54s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-894723 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-894723 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-894723 config get cpus: exit status 14 (97.90246ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-894723 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-894723 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-894723 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-894723 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-894723 config get cpus: exit status 14 (86.370837ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.54s)
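
Note: the two exit-status-14 results above are the expected behaviour of config get on an unset key. The full round trip, with a stock binary:

minikube -p functional-894723 config set cpus 2
minikube -p functional-894723 config get cpus     # prints 2
minikube -p functional-894723 config unset cpus
minikube -p functional-894723 config get cpus     # exit status 14: key not found in config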

                                                
                                    
TestFunctional/parallel/DashboardCmd (10.61s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-894723 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-894723 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 39588: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (10.61s)

                                                
                                    
TestFunctional/parallel/DryRun (1s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-arm64 start -p functional-894723 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-894723 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (341.657051ms)

                                                
                                                
-- stdout --
	* [functional-894723] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18320
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18320-2408/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18320-2408/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0307 21:56:22.776217   38930 out.go:291] Setting OutFile to fd 1 ...
	I0307 21:56:22.776380   38930 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 21:56:22.776391   38930 out.go:304] Setting ErrFile to fd 2...
	I0307 21:56:22.776396   38930 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 21:56:22.776647   38930 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18320-2408/.minikube/bin
	I0307 21:56:22.776997   38930 out.go:298] Setting JSON to false
	I0307 21:56:22.777875   38930 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":2326,"bootTime":1709846257,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0307 21:56:22.777940   38930 start.go:139] virtualization:  
	I0307 21:56:22.780462   38930 out.go:177] * [functional-894723] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0307 21:56:22.782875   38930 out.go:177]   - MINIKUBE_LOCATION=18320
	I0307 21:56:22.784940   38930 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 21:56:22.782947   38930 notify.go:220] Checking for updates...
	I0307 21:56:22.789395   38930 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18320-2408/kubeconfig
	I0307 21:56:22.792965   38930 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18320-2408/.minikube
	I0307 21:56:22.794949   38930 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0307 21:56:22.796930   38930 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 21:56:22.799116   38930 config.go:182] Loaded profile config "functional-894723": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0307 21:56:22.800145   38930 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 21:56:22.876671   38930 docker.go:122] docker version: linux-25.0.4:Docker Engine - Community
	I0307 21:56:22.876796   38930 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0307 21:56:23.043536   38930 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:58 SystemTime:2024-03-07 21:56:23.030952093 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0307 21:56:23.043643   38930 docker.go:295] overlay module found
	I0307 21:56:23.046043   38930 out.go:177] * Using the docker driver based on existing profile
	I0307 21:56:23.047880   38930 start.go:297] selected driver: docker
	I0307 21:56:23.047896   38930 start.go:901] validating driver "docker" against &{Name:functional-894723 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-894723 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:do
cker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 21:56:23.047996   38930 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 21:56:23.050672   38930 out.go:177] 
	W0307 21:56:23.052990   38930 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0307 21:56:23.055137   38930 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-arm64 start -p functional-894723 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (1.00s)
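
Note: --dry-run validates flags against the existing profile without touching the cluster; the 250MB request fails with RSRC_INSUFFICIENT_REQ_MEMORY (exit status 23) because, per the error above, minikube enforces a usable minimum of 1800MB. Sketch:

minikube start -p functional-894723 --dry-run --memory 250MB \
  --driver=docker --container-runtime=containerd
echo $?   # 23 when validation fails, 0 for a viable config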

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.25s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-arm64 start -p functional-894723 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-894723 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (246.38023ms)

                                                
                                                
-- stdout --
	* [functional-894723] minikube v1.32.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18320
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18320-2408/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18320-2408/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0307 21:56:22.563034   38865 out.go:291] Setting OutFile to fd 1 ...
	I0307 21:56:22.563258   38865 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 21:56:22.563303   38865 out.go:304] Setting ErrFile to fd 2...
	I0307 21:56:22.563329   38865 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 21:56:22.563800   38865 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18320-2408/.minikube/bin
	I0307 21:56:22.565093   38865 out.go:298] Setting JSON to false
	I0307 21:56:22.568616   38865 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":2326,"bootTime":1709846257,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0307 21:56:22.568711   38865 start.go:139] virtualization:  
	I0307 21:56:22.571332   38865 out.go:177] * [functional-894723] minikube v1.32.0 sur Ubuntu 20.04 (arm64)
	I0307 21:56:22.573665   38865 out.go:177]   - MINIKUBE_LOCATION=18320
	I0307 21:56:22.575493   38865 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 21:56:22.573811   38865 notify.go:220] Checking for updates...
	I0307 21:56:22.580725   38865 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18320-2408/kubeconfig
	I0307 21:56:22.582994   38865 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18320-2408/.minikube
	I0307 21:56:22.584921   38865 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0307 21:56:22.587103   38865 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 21:56:22.589367   38865 config.go:182] Loaded profile config "functional-894723": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0307 21:56:22.589979   38865 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 21:56:22.621311   38865 docker.go:122] docker version: linux-25.0.4:Docker Engine - Community
	I0307 21:56:22.621425   38865 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0307 21:56:22.700740   38865 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:57 SystemTime:2024-03-07 21:56:22.686257661 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0307 21:56:22.700882   38865 docker.go:295] overlay module found
	I0307 21:56:22.704155   38865 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0307 21:56:22.706314   38865 start.go:297] selected driver: docker
	I0307 21:56:22.706338   38865 start.go:901] validating driver "docker" against &{Name:functional-894723 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-894723 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0307 21:56:22.706455   38865 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 21:56:22.708755   38865 out.go:177] 
	W0307 21:56:22.710582   38865 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0307 21:56:22.712492   38865 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.25s)
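
The French output above is the point of this test: minikube selects its message catalogue from the locale environment, so the same dry-run produces a localized RSRC_INSUFFICIENT_REQ_MEMORY error. A minimal local repro sketch (assumes an existing functional-894723 profile, that the binary ships the fr translations, and that LC_ALL is consulted before LANG):

  # hypothetical repro -- not part of the logged run
  LC_ALL=fr out/minikube-linux-arm64 start -p functional-894723 --dry-run \
    --memory 250MB --alsologtostderr --driver=docker --container-runtime=containerd
  # expected: exit status 23, error text in French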

TestFunctional/parallel/StatusCmd (1.22s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-arm64 -p functional-894723 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-arm64 -p functional-894723 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-arm64 -p functional-894723 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.22s)

TestFunctional/parallel/ServiceCmdConnect (10.71s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1623: (dbg) Run:  kubectl --context functional-894723 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-894723 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-7799dfb7c6-cbzn7" [8f3f631a-545b-4cde-adf9-a4ae147cb505] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-7799dfb7c6-cbzn7" [8f3f631a-545b-4cde-adf9-a4ae147cb505] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.016581953s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-arm64 -p functional-894723 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.49.2:30572
functional_test.go:1671: http://192.168.49.2:30572: success! body:

Hostname: hello-node-connect-7799dfb7c6-cbzn7

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:30572
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.71s)
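
The deployment/expose/service-url sequence above is the standard NodePort round trip: the URL printed by `minikube service --url` is simply the node IP plus the allocated NodePort (192.168.49.2:30572 here). A condensed sketch, with a hypothetical deployment name:

  kubectl --context functional-894723 create deployment hello --image=registry.k8s.io/echoserver-arm:1.8
  kubectl --context functional-894723 expose deployment hello --type=NodePort --port=8080
  out/minikube-linux-arm64 -p functional-894723 service hello --url   # http://<node-ip>:<nodeport>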

TestFunctional/parallel/AddonsCmd (0.16s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-arm64 -p functional-894723 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-arm64 -p functional-894723 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.16s)

TestFunctional/parallel/PersistentVolumeClaim (25.48s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [039eeda5-6a4e-4b7b-8e4f-5793493e89ca] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.00500634s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-894723 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-894723 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-894723 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-894723 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [047640e7-e7a5-40d7-847e-302e4186c1fc] Pending
helpers_test.go:344: "sp-pod" [047640e7-e7a5-40d7-847e-302e4186c1fc] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [047640e7-e7a5-40d7-847e-302e4186c1fc] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.004235581s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-894723 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-894723 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-894723 delete -f testdata/storage-provisioner/pod.yaml: (1.199407973s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-894723 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [dc5f39a6-3067-4db1-a9c9-97bb26a3e4d3] Pending
helpers_test.go:344: "sp-pod" [dc5f39a6-3067-4db1-a9c9-97bb26a3e4d3] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.004229192s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-894723 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.48s)
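
The test drives the dynamic-provisioning path end to end: the storage-provisioner addon binds the claim, one pod writes /tmp/mount/foo, and a recreated pod still sees the file. The actual testdata manifests are not reproduced in this log; an illustrative claim of the same shape (name and size are guesses, not the repo file) would be:

  cat <<EOF | kubectl --context functional-894723 apply -f -
  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: myclaim
  spec:
    accessModes: [ReadWriteOnce]
    resources:
      requests:
        storage: 500Mi
  EOF
  kubectl --context functional-894723 get pvc myclaim -o jsonpath='{.status.phase}'   # Bound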

TestFunctional/parallel/SSHCmd (0.68s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-arm64 -p functional-894723 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-arm64 -p functional-894723 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.68s)

TestFunctional/parallel/CpCmd (2.4s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-894723 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-894723 ssh -n functional-894723 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-894723 cp functional-894723:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd991923002/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-894723 ssh -n functional-894723 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-894723 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-894723 ssh -n functional-894723 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.40s)

TestFunctional/parallel/FileSync (0.29s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/7764/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-arm64 -p functional-894723 ssh "sudo cat /etc/test/nested/copy/7764/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.29s)

TestFunctional/parallel/CertSync (2.25s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/7764.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-894723 ssh "sudo cat /etc/ssl/certs/7764.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/7764.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-894723 ssh "sudo cat /usr/share/ca-certificates/7764.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-894723 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/77642.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-894723 ssh "sudo cat /etc/ssl/certs/77642.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/77642.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-894723 ssh "sudo cat /usr/share/ca-certificates/77642.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-894723 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.25s)
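
The hashed file names checked above (/etc/ssl/certs/51391683.0 and 3ec20f2e.0) are OpenSSL subject-hash lookups: cert sync installs each synced certificate under its subject hash with a .0 suffix so the system trust store can find it. The name can be recomputed from the cert itself (sketch; assumes the test cert is on hand locally as 7764.pem):

  openssl x509 -in 7764.pem -noout -subject_hash   # prints the 8-hex-digit name, e.g. 51391683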

TestFunctional/parallel/NodeLabels (0.11s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-894723 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.11s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.71s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-894723 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-894723 ssh "sudo systemctl is-active docker": exit status 1 (344.297872ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-894723 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-894723 ssh "sudo systemctl is-active crio": exit status 1 (360.752157ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.71s)
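
The two `exit status 1` results are the expected outcome: `systemctl is-active` exits non-zero for any state other than active (3 conventionally meaning inactive), and `minikube ssh` folds the remote failure into its own exit status 1. A sketch of the mapping against the same profile, where containerd is the active runtime:

  out/minikube-linux-arm64 -p functional-894723 ssh "sudo systemctl is-active containerd"   # "active", exit 0
  out/minikube-linux-arm64 -p functional-894723 ssh "sudo systemctl is-active docker"       # "inactive", remote exit 3, local exit 1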

TestFunctional/parallel/License (0.3s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.30s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.74s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-894723 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-894723 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-894723 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-894723 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 36256: os: process already finished
helpers_test.go:502: unable to terminate pid 36073: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.74s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-894723 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (15.47s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-894723 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [f91a95e2-b635-4530-98ed-c929a6ba1c69] Pending
helpers_test.go:344: "nginx-svc" [f91a95e2-b635-4530-98ed-c929a6ba1c69] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..)
E0307 21:55:49.927042    7764 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/addons-963512/client.crt: no such file or directory
helpers_test.go:344: "nginx-svc" [f91a95e2-b635-4530-98ed-c929a6ba1c69] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [f91a95e2-b635-4530-98ed-c929a6ba1c69] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 15.003939215s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (15.47s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-894723 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.102.101.51 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
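
`minikube tunnel` is what makes the LoadBalancer service directly reachable: it routes the cluster's service CIDR to the node, and the service's ClusterIP (10.102.101.51 above) is published as its ingress IP. A manual sketch of the same check:

  out/minikube-linux-arm64 -p functional-894723 tunnel &   # keeps running in the background; may prompt for sudo to add routes
  kubectl --context functional-894723 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
  curl -s http://10.102.101.51/ >/dev/null && echo reachable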

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-894723 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/DeployApp (7.28s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1433: (dbg) Run:  kubectl --context functional-894723 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-894723 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-759d89bdcc-kdlhz" [dc5af884-359b-4820-974c-f85655e2772e] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-759d89bdcc-kdlhz" [dc5af884-359b-4820-974c-f85655e2772e] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.003224493s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.28s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.46s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.46s)

TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1311: Took "313.001348ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1325: Took "60.398189ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1362: Took "344.730716ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1375: Took "67.474486ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

TestFunctional/parallel/MountCmd/any-port (7.04s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-894723 /tmp/TestFunctionalparallelMountCmdany-port2370445667/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1709848573286885923" to /tmp/TestFunctionalparallelMountCmdany-port2370445667/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1709848573286885923" to /tmp/TestFunctionalparallelMountCmdany-port2370445667/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1709848573286885923" to /tmp/TestFunctionalparallelMountCmdany-port2370445667/001/test-1709848573286885923
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-894723 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-894723 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (366.132116ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-894723 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-894723 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Mar  7 21:56 created-by-test
-rw-r--r-- 1 docker docker 24 Mar  7 21:56 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Mar  7 21:56 test-1709848573286885923
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-894723 ssh cat /mount-9p/test-1709848573286885923
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-894723 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [5af5f02e-dd3d-49da-8d5e-abb6e2beee84] Pending
helpers_test.go:344: "busybox-mount" [5af5f02e-dd3d-49da-8d5e-abb6e2beee84] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [5af5f02e-dd3d-49da-8d5e-abb6e2beee84] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [5af5f02e-dd3d-49da-8d5e-abb6e2beee84] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.004140134s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-894723 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-894723 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-894723 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-894723 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-894723 /tmp/TestFunctionalparallelMountCmdany-port2370445667/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.04s)
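
The first `findmnt` failure is only a startup race: the probe runs before the 9p server has finished mounting, and the immediate retry succeeds. The same check by hand (sketch, with an arbitrary host directory):

  out/minikube-linux-arm64 mount -p functional-894723 /tmp/demo:/mount-9p &   # background the mount helper
  out/minikube-linux-arm64 -p functional-894723 ssh "findmnt -T /mount-9p"    # should report a 9p filesystem
  out/minikube-linux-arm64 -p functional-894723 ssh "sudo umount -f /mount-9p"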

TestFunctional/parallel/ServiceCmd/List (0.49s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-arm64 -p functional-894723 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.49s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.61s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-arm64 -p functional-894723 service list -o json
functional_test.go:1490: Took "609.124054ms" to run "out/minikube-linux-arm64 -p functional-894723 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.61s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.49s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-arm64 -p functional-894723 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.49.2:31124
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.49s)

TestFunctional/parallel/ServiceCmd/Format (0.57s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-arm64 -p functional-894723 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.57s)

TestFunctional/parallel/MountCmd/specific-port (2.47s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-894723 /tmp/TestFunctionalparallelMountCmdspecific-port2054558629/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-894723 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-894723 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (510.585729ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-894723 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-894723 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-894723 /tmp/TestFunctionalparallelMountCmdspecific-port2054558629/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-894723 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-894723 ssh "sudo umount -f /mount-9p": exit status 1 (389.290284ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-894723 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-894723 /tmp/TestFunctionalparallelMountCmdspecific-port2054558629/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.47s)

TestFunctional/parallel/ServiceCmd/URL (0.39s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-arm64 -p functional-894723 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.49.2:31124
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.39s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.71s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-894723 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2848778011/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-894723 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2848778011/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-894723 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2848778011/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-894723 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-894723 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-894723 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-894723 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-894723 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2848778011/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-894723 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2848778011/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-894723 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2848778011/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.71s)

TestFunctional/parallel/Version/short (0.07s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-arm64 -p functional-894723 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

TestFunctional/parallel/Version/components (1.21s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-arm64 -p functional-894723 version -o=json --components
functional_test.go:2266: (dbg) Done: out/minikube-linux-arm64 -p functional-894723 version -o=json --components: (1.21432778s)
--- PASS: TestFunctional/parallel/Version/components (1.21s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-894723 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-894723 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-894723
docker.io/kindest/kindnetd:v20240202-8f1494ea
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-894723 image ls --format short --alsologtostderr:
I0307 21:56:44.482152   40787 out.go:291] Setting OutFile to fd 1 ...
I0307 21:56:44.482344   40787 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0307 21:56:44.482358   40787 out.go:304] Setting ErrFile to fd 2...
I0307 21:56:44.482364   40787 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0307 21:56:44.482691   40787 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18320-2408/.minikube/bin
I0307 21:56:44.483457   40787 config.go:182] Loaded profile config "functional-894723": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0307 21:56:44.483649   40787 config.go:182] Loaded profile config "functional-894723": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0307 21:56:44.484243   40787 cli_runner.go:164] Run: docker container inspect functional-894723 --format={{.State.Status}}
I0307 21:56:44.531106   40787 ssh_runner.go:195] Run: systemctl --version
I0307 21:56:44.531167   40787 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-894723
I0307 21:56:44.551733   40787 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/18320-2408/.minikube/machines/functional-894723/id_rsa Username:docker}
I0307 21:56:44.644805   40787 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)
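
The stderr trace shows how `image ls` is answered on a containerd cluster: minikube shells into the node and reads the CRI image store, then reformats the result. The underlying query (taken verbatim from the trace above) can be run directly:

  out/minikube-linux-arm64 -p functional-894723 ssh "sudo crictl images --output json"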

TestFunctional/parallel/ImageCommands/ImageListTable (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-894723 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-894723 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/kube-apiserver              | v1.28.4            | sha256:04b4c4 | 31.6MB |
| registry.k8s.io/kube-controller-manager     | v1.28.4            | sha256:9961cb | 30.4MB |
| registry.k8s.io/pause                       | 3.3                | sha256:3d1873 | 249kB  |
| registry.k8s.io/pause                       | latest             | sha256:8cb209 | 71.3kB |
| docker.io/kindest/kindnetd                  | v20230809-80a64d96 | sha256:04b4ea | 25.3MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:1611cd | 1.94MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:ba04bb | 8.03MB |
| registry.k8s.io/coredns/coredns             | v1.10.1            | sha256:97e046 | 14.6MB |
| registry.k8s.io/echoserver-arm              | 1.8                | sha256:72565b | 45.3MB |
| registry.k8s.io/kube-scheduler              | v1.28.4            | sha256:05c284 | 17.1MB |
| docker.io/library/nginx                     | alpine             | sha256:be5e6f | 17.6MB |
| docker.io/library/nginx                     | latest             | sha256:760b7c | 67.2MB |
| docker.io/library/minikube-local-cache-test | functional-894723  | sha256:364465 | 1.01kB |
| registry.k8s.io/etcd                        | 3.5.9-0            | sha256:9cdd64 | 86.5MB |
| registry.k8s.io/kube-proxy                  | v1.28.4            | sha256:3ca3ca | 22MB   |
| registry.k8s.io/pause                       | 3.1                | sha256:8057e0 | 262kB  |
| registry.k8s.io/pause                       | 3.9                | sha256:829e9d | 268kB  |
| docker.io/kindest/kindnetd                  | v20240202-8f1494ea | sha256:4740c1 | 25.3MB |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-894723 image ls --format table --alsologtostderr:
I0307 21:56:44.853137   40846 out.go:291] Setting OutFile to fd 1 ...
I0307 21:56:44.853313   40846 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0307 21:56:44.853323   40846 out.go:304] Setting ErrFile to fd 2...
I0307 21:56:44.853328   40846 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0307 21:56:44.853566   40846 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18320-2408/.minikube/bin
I0307 21:56:44.854178   40846 config.go:182] Loaded profile config "functional-894723": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0307 21:56:44.854299   40846 config.go:182] Loaded profile config "functional-894723": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0307 21:56:44.854765   40846 cli_runner.go:164] Run: docker container inspect functional-894723 --format={{.State.Status}}
I0307 21:56:44.872842   40846 ssh_runner.go:195] Run: systemctl --version
I0307 21:56:44.872898   40846 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-894723
I0307 21:56:44.894303   40846 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/18320-2408/.minikube/machines/functional-894723/id_rsa Username:docker}
I0307 21:56:44.990303   40846 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.30s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-894723 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-894723 image ls --format json --alsologtostderr:
[{"id":"sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"45324675"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"268051"},{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},{"id":"sha256:be5e6f23a9904ed26efa7a49fb3d5e63d1c488dbb7b5134e869488afd747ec3f","repoDigests":["docker.io/library/nginx@sha256:6a2f8b28e45c4adea04ec207a251fd4a2df03ddc930f782af51e315ebc76e9a9"],"repoTags":["docker.io/library/nginx:alpine"],"size":"17601423"},{"id":"sha256:9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace","repoDigests":["registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"86464836"},{"id":"sha256:9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"30360149"},{"id":"sha256:04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"25324029"},{"id":"sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"74084559"},{"id":"sha256:3644657d7cf6cc0931a26423d5fe2b1360c41a6c486e8018221ad08574af599a","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-894723"],"size":"1006"},{"id":"sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"18306114"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"14557471"},{"id":"sha256:3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39","repoDigests":["registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"22001357"},{"id":"sha256:05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54","repoDigests":["registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"17082307"},{"id":"sha256:4740c1948d3fceb8d7dacc63033aa6299d80794ee4f4811539ec1081d9211f3d","repoDigests":["docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988"],"repoTags":["docker.io/kindest/kindnetd:v20240202-8f1494ea"],"size":"25336339"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419","repoDigests":["registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"31582354"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"},{"id":"sha256:760b7cbba31e196288effd2af6924c42637ac5e0d67db4de6309f24518844676","repoDigests":["docker.io/library/nginx@sha256:c26ae7472d624ba1fafd296e73cecc4f93f853088e6a9c13c0d52f6ca5865107"],"repoTags":["docker.io/library/nginx:latest"],"size":"67216905"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-894723 image ls --format json --alsologtostderr:
I0307 21:56:44.784972   40835 out.go:291] Setting OutFile to fd 1 ...
I0307 21:56:44.785145   40835 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0307 21:56:44.785156   40835 out.go:304] Setting ErrFile to fd 2...
I0307 21:56:44.785162   40835 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0307 21:56:44.785398   40835 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18320-2408/.minikube/bin
I0307 21:56:44.786012   40835 config.go:182] Loaded profile config "functional-894723": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0307 21:56:44.786144   40835 config.go:182] Loaded profile config "functional-894723": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0307 21:56:44.786664   40835 cli_runner.go:164] Run: docker container inspect functional-894723 --format={{.State.Status}}
I0307 21:56:44.814135   40835 ssh_runner.go:195] Run: systemctl --version
I0307 21:56:44.814202   40835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-894723
I0307 21:56:44.839834   40835 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/18320-2408/.minikube/machines/functional-894723/id_rsa Username:docker}
I0307 21:56:44.937548   40835 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)
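Note: the JSON listing above is minikube's reserialization of sudo crictl images --output json on the node (the last Run line in the stderr). A minimal sketch for inspecting the same data by hand, assuming the functional-894723 profile is still running and jq is installed on the host:

	out/minikube-linux-arm64 -p functional-894723 ssh "sudo crictl images --output json" \
	  | jq -r '.images[] | (.repoTags[0] // "<untagged>") + "  " + .size'

Each image entry carries the id, repoDigests, repoTags, and size fields shown in the JSON above.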
TestFunctional/parallel/ImageCommands/ImageListYaml (0.36s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-894723 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-894723 image ls --format yaml --alsologtostderr:
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "45324675"
- id: sha256:9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace
repoDigests:
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "86464836"
- id: sha256:04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "31582354"
- id: sha256:04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "25324029"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:760b7cbba31e196288effd2af6924c42637ac5e0d67db4de6309f24518844676
repoDigests:
- docker.io/library/nginx@sha256:c26ae7472d624ba1fafd296e73cecc4f93f853088e6a9c13c0d52f6ca5865107
repoTags:
- docker.io/library/nginx:latest
size: "67216905"
- id: sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "14557471"
- id: sha256:4740c1948d3fceb8d7dacc63033aa6299d80794ee4f4811539ec1081d9211f3d
repoDigests:
- docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988
repoTags:
- docker.io/kindest/kindnetd:v20240202-8f1494ea
size: "25336339"
- id: sha256:3644657d7cf6cc0931a26423d5fe2b1360c41a6c486e8018221ad08574af599a
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-894723
size: "1006"
- id: sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "18306114"
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39
repoDigests:
- registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "22001357"
- id: sha256:05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "17082307"
- id: sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "268051"
- id: sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "74084559"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:be5e6f23a9904ed26efa7a49fb3d5e63d1c488dbb7b5134e869488afd747ec3f
repoDigests:
- docker.io/library/nginx@sha256:6a2f8b28e45c4adea04ec207a251fd4a2df03ddc930f782af51e315ebc76e9a9
repoTags:
- docker.io/library/nginx:alpine
size: "17601423"
- id: sha256:9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "30360149"
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-894723 image ls --format yaml --alsologtostderr:
I0307 21:56:44.538069   40788 out.go:291] Setting OutFile to fd 1 ...
I0307 21:56:44.538167   40788 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0307 21:56:44.538171   40788 out.go:304] Setting ErrFile to fd 2...
I0307 21:56:44.538176   40788 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0307 21:56:44.538413   40788 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18320-2408/.minikube/bin
I0307 21:56:44.538995   40788 config.go:182] Loaded profile config "functional-894723": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0307 21:56:44.539131   40788 config.go:182] Loaded profile config "functional-894723": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0307 21:56:44.539586   40788 cli_runner.go:164] Run: docker container inspect functional-894723 --format={{.State.Status}}
I0307 21:56:44.567475   40788 ssh_runner.go:195] Run: systemctl --version
I0307 21:56:44.567530   40788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-894723
I0307 21:56:44.591920   40788 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/18320-2408/.minikube/machines/functional-894723/id_rsa Username:docker}
I0307 21:56:44.688688   40788 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.36s)
TestFunctional/parallel/ImageCommands/ImageBuild (2.73s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-arm64 -p functional-894723 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-894723 ssh pgrep buildkitd: exit status 1 (373.940781ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p functional-894723 image build -t localhost/my-image:functional-894723 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p functional-894723 image build -t localhost/my-image:functional-894723 testdata/build --alsologtostderr: (2.122695439s)
functional_test.go:322: (dbg) Stderr: out/minikube-linux-arm64 -p functional-894723 image build -t localhost/my-image:functional-894723 testdata/build --alsologtostderr:
I0307 21:56:45.425297   40944 out.go:291] Setting OutFile to fd 1 ...
I0307 21:56:45.425596   40944 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0307 21:56:45.425631   40944 out.go:304] Setting ErrFile to fd 2...
I0307 21:56:45.425653   40944 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0307 21:56:45.425920   40944 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18320-2408/.minikube/bin
I0307 21:56:45.426707   40944 config.go:182] Loaded profile config "functional-894723": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0307 21:56:45.428207   40944 config.go:182] Loaded profile config "functional-894723": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0307 21:56:45.428754   40944 cli_runner.go:164] Run: docker container inspect functional-894723 --format={{.State.Status}}
I0307 21:56:45.446450   40944 ssh_runner.go:195] Run: systemctl --version
I0307 21:56:45.446507   40944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-894723
I0307 21:56:45.462206   40944 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/18320-2408/.minikube/machines/functional-894723/id_rsa Username:docker}
I0307 21:56:45.552941   40944 build_images.go:151] Building image from path: /tmp/build.4046594900.tar
I0307 21:56:45.553016   40944 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0307 21:56:45.561765   40944 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.4046594900.tar
I0307 21:56:45.565367   40944 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.4046594900.tar: stat -c "%s %y" /var/lib/minikube/build/build.4046594900.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.4046594900.tar': No such file or directory
I0307 21:56:45.565399   40944 ssh_runner.go:362] scp /tmp/build.4046594900.tar --> /var/lib/minikube/build/build.4046594900.tar (3072 bytes)
I0307 21:56:45.589782   40944 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.4046594900
I0307 21:56:45.598742   40944 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.4046594900 -xf /var/lib/minikube/build/build.4046594900.tar
I0307 21:56:45.607514   40944 containerd.go:379] Building image: /var/lib/minikube/build/build.4046594900
I0307 21:56:45.607595   40944 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.4046594900 --local dockerfile=/var/lib/minikube/build/build.4046594900 --output type=image,name=localhost/my-image:functional-894723
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.7s
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 DONE 0.1s
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.2s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.4s
#6 [2/3] RUN true
#6 DONE 0.2s
#7 [3/3] ADD content.txt /
#7 DONE 0.0s
#8 exporting to image
#8 exporting layers
#8 exporting layers 0.1s done
#8 exporting manifest sha256:ace4a9e9ef3be6bd011b2318427c9d2ae2d4afe3f3b17277f167cdfcce341af3 0.0s done
#8 exporting config sha256:5b59bf201b78cca1ac5cad0f6ee7a57a80fe61b987e118e5abef373659fccbd1 0.0s done
#8 naming to localhost/my-image:functional-894723 done
#8 DONE 0.2s
I0307 21:56:47.453340   40944 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.4046594900 --local dockerfile=/var/lib/minikube/build/build.4046594900 --output type=image,name=localhost/my-image:functional-894723: (1.845698383s)
I0307 21:56:47.453416   40944 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.4046594900
I0307 21:56:47.463020   40944 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.4046594900.tar
I0307 21:56:47.472455   40944 build_images.go:207] Built localhost/my-image:functional-894723 from /tmp/build.4046594900.tar
I0307 21:56:47.472528   40944 build_images.go:123] succeeded building to: functional-894723
I0307 21:56:47.472548   40944 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-894723 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.73s)
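As the build stderr shows, on the containerd runtime minikube stages the build context as a tar under /var/lib/minikube/build and drives BuildKit directly. A sketch of the equivalent manual invocation inside the node, reusing the staged directory from this run (the pgrep probe at the top of the test merely checks whether buildkitd is already running; here it was not yet up):

	out/minikube-linux-arm64 -p functional-894723 ssh -- sudo buildctl build \
	  --frontend dockerfile.v0 \
	  --local context=/var/lib/minikube/build/build.4046594900 \
	  --local dockerfile=/var/lib/minikube/build/build.4046594900 \
	  --output type=image,name=localhost/my-image:functional-894723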
TestFunctional/parallel/ImageCommands/Setup (2.46s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.432857069s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-894723
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.46s)
TestFunctional/parallel/UpdateContextCmd/no_changes (0.16s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-894723 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.16s)
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.23s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-894723 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.23s)
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.17s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-894723 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.17s)
TestFunctional/parallel/ImageCommands/ImageRemove (0.48s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-arm64 -p functional-894723 image rm gcr.io/google-containers/addon-resizer:functional-894723 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-894723 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.48s)
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.62s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-894723
functional_test.go:423: (dbg) Run:  out/minikube-linux-arm64 -p functional-894723 image save --daemon gcr.io/google-containers/addon-resizer:functional-894723 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-894723
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.62s)
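The round trip above can be replayed by hand: drop the tag from the host's Docker daemon, export it back out of the cluster runtime into the daemon, then confirm it exists again. Same commands as the test, in order:

	docker rmi gcr.io/google-containers/addon-resizer:functional-894723
	out/minikube-linux-arm64 -p functional-894723 image save --daemon \
	  gcr.io/google-containers/addon-resizer:functional-894723
	docker image inspect gcr.io/google-containers/addon-resizer:functional-894723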
TestFunctional/delete_addon-resizer_images (0.08s)
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-894723
--- PASS: TestFunctional/delete_addon-resizer_images (0.08s)
TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-894723
--- PASS: TestFunctional/delete_my-image_image (0.02s)
TestFunctional/delete_minikube_cached_images (0.01s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-894723
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)
TestMutliControlPlane/serial/StartCluster (136.34s)
=== RUN   TestMutliControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-191523 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E0307 21:57:11.848396    7764 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/addons-963512/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-191523 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (2m15.469748933s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-191523 status -v=7 --alsologtostderr
--- PASS: TestMutliControlPlane/serial/StartCluster (136.34s)
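For reference, the profile under test is a multi-control-plane (HA) cluster; per the status output later in this report it comes up with three control-plane nodes. The invocation and follow-up health check, exactly as driven by the test:

	out/minikube-linux-arm64 start -p ha-191523 --wait=true --memory=2200 --ha \
	  -v=7 --alsologtostderr --driver=docker --container-runtime=containerd
	out/minikube-linux-arm64 -p ha-191523 status -v=7 --alsologtostderr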
TestMutliControlPlane/serial/DeployApp (18.8s)
=== RUN   TestMutliControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-191523 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-191523 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-191523 -- rollout status deployment/busybox: (15.462610451s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-191523 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-191523 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-191523 -- exec busybox-5b5d89c9d6-g766k -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-191523 -- exec busybox-5b5d89c9d6-sjdm8 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-191523 -- exec busybox-5b5d89c9d6-xd9v8 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-191523 -- exec busybox-5b5d89c9d6-g766k -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-191523 -- exec busybox-5b5d89c9d6-sjdm8 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-191523 -- exec busybox-5b5d89c9d6-xd9v8 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-191523 -- exec busybox-5b5d89c9d6-g766k -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-191523 -- exec busybox-5b5d89c9d6-sjdm8 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-191523 -- exec busybox-5b5d89c9d6-xd9v8 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMutliControlPlane/serial/DeployApp (18.80s)
TestMutliControlPlane/serial/PingHostFromPods (1.66s)
=== RUN   TestMutliControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-191523 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-191523 -- exec busybox-5b5d89c9d6-g766k -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-191523 -- exec busybox-5b5d89c9d6-g766k -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-191523 -- exec busybox-5b5d89c9d6-sjdm8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-191523 -- exec busybox-5b5d89c9d6-sjdm8 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-191523 -- exec busybox-5b5d89c9d6-xd9v8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-191523 -- exec busybox-5b5d89c9d6-xd9v8 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMutliControlPlane/serial/PingHostFromPods (1.66s)
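The pipeline in these exec calls recovers the host gateway IP from inside a pod: busybox's nslookup resolves host.minikube.internal, awk 'NR==5' keeps the fifth output line (where that nslookup prints the resolved address), and cut -d' ' -f3 isolates the IP, which the follow-up exec then pings. Standalone, against a pod name from this run:

	kubectl --context ha-191523 exec busybox-5b5d89c9d6-g766k -- sh -c \
	  "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
	kubectl --context ha-191523 exec busybox-5b5d89c9d6-g766k -- sh -c "ping -c 1 192.168.49.1"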
TestMutliControlPlane/serial/AddWorkerNode (25.46s)
=== RUN   TestMutliControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-191523 -v=7 --alsologtostderr
E0307 21:59:28.004358    7764 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/addons-963512/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-191523 -v=7 --alsologtostderr: (24.412812578s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-191523 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-191523 status -v=7 --alsologtostderr: (1.043243997s)
--- PASS: TestMutliControlPlane/serial/AddWorkerNode (25.46s)
TestMutliControlPlane/serial/NodeLabels (0.11s)
=== RUN   TestMutliControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-191523 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMutliControlPlane/serial/NodeLabels (0.11s)
TestMutliControlPlane/serial/HAppyAfterClusterStart (0.75s)
=== RUN   TestMutliControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMutliControlPlane/serial/HAppyAfterClusterStart (0.75s)
TestMutliControlPlane/serial/CopyFile (20.41s)
=== RUN   TestMutliControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-arm64 -p ha-191523 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-191523 cp testdata/cp-test.txt ha-191523:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-191523 ssh -n ha-191523 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-191523 cp ha-191523:/home/docker/cp-test.txt /tmp/TestMutliControlPlaneserialCopyFile2751486529/001/cp-test_ha-191523.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-191523 ssh -n ha-191523 "sudo cat /home/docker/cp-test.txt"
E0307 21:59:55.689508    7764 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/addons-963512/client.crt: no such file or directory
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-191523 cp ha-191523:/home/docker/cp-test.txt ha-191523-m02:/home/docker/cp-test_ha-191523_ha-191523-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-191523 ssh -n ha-191523 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-191523 ssh -n ha-191523-m02 "sudo cat /home/docker/cp-test_ha-191523_ha-191523-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-191523 cp ha-191523:/home/docker/cp-test.txt ha-191523-m03:/home/docker/cp-test_ha-191523_ha-191523-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-191523 ssh -n ha-191523 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-191523 ssh -n ha-191523-m03 "sudo cat /home/docker/cp-test_ha-191523_ha-191523-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-191523 cp ha-191523:/home/docker/cp-test.txt ha-191523-m04:/home/docker/cp-test_ha-191523_ha-191523-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-191523 ssh -n ha-191523 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-191523 ssh -n ha-191523-m04 "sudo cat /home/docker/cp-test_ha-191523_ha-191523-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-191523 cp testdata/cp-test.txt ha-191523-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-191523 ssh -n ha-191523-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-191523 cp ha-191523-m02:/home/docker/cp-test.txt /tmp/TestMutliControlPlaneserialCopyFile2751486529/001/cp-test_ha-191523-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-191523 ssh -n ha-191523-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-191523 cp ha-191523-m02:/home/docker/cp-test.txt ha-191523:/home/docker/cp-test_ha-191523-m02_ha-191523.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-191523 ssh -n ha-191523-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-191523 ssh -n ha-191523 "sudo cat /home/docker/cp-test_ha-191523-m02_ha-191523.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-191523 cp ha-191523-m02:/home/docker/cp-test.txt ha-191523-m03:/home/docker/cp-test_ha-191523-m02_ha-191523-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-191523 ssh -n ha-191523-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-191523 ssh -n ha-191523-m03 "sudo cat /home/docker/cp-test_ha-191523-m02_ha-191523-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-191523 cp ha-191523-m02:/home/docker/cp-test.txt ha-191523-m04:/home/docker/cp-test_ha-191523-m02_ha-191523-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-191523 ssh -n ha-191523-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-191523 ssh -n ha-191523-m04 "sudo cat /home/docker/cp-test_ha-191523-m02_ha-191523-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-191523 cp testdata/cp-test.txt ha-191523-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-191523 ssh -n ha-191523-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-191523 cp ha-191523-m03:/home/docker/cp-test.txt /tmp/TestMutliControlPlaneserialCopyFile2751486529/001/cp-test_ha-191523-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-191523 ssh -n ha-191523-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-191523 cp ha-191523-m03:/home/docker/cp-test.txt ha-191523:/home/docker/cp-test_ha-191523-m03_ha-191523.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-191523 ssh -n ha-191523-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-191523 ssh -n ha-191523 "sudo cat /home/docker/cp-test_ha-191523-m03_ha-191523.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-191523 cp ha-191523-m03:/home/docker/cp-test.txt ha-191523-m02:/home/docker/cp-test_ha-191523-m03_ha-191523-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-191523 ssh -n ha-191523-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-191523 ssh -n ha-191523-m02 "sudo cat /home/docker/cp-test_ha-191523-m03_ha-191523-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-191523 cp ha-191523-m03:/home/docker/cp-test.txt ha-191523-m04:/home/docker/cp-test_ha-191523-m03_ha-191523-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-191523 ssh -n ha-191523-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-191523 ssh -n ha-191523-m04 "sudo cat /home/docker/cp-test_ha-191523-m03_ha-191523-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-191523 cp testdata/cp-test.txt ha-191523-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-191523 ssh -n ha-191523-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-191523 cp ha-191523-m04:/home/docker/cp-test.txt /tmp/TestMutliControlPlaneserialCopyFile2751486529/001/cp-test_ha-191523-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-191523 ssh -n ha-191523-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-191523 cp ha-191523-m04:/home/docker/cp-test.txt ha-191523:/home/docker/cp-test_ha-191523-m04_ha-191523.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-191523 ssh -n ha-191523-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-191523 ssh -n ha-191523 "sudo cat /home/docker/cp-test_ha-191523-m04_ha-191523.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-191523 cp ha-191523-m04:/home/docker/cp-test.txt ha-191523-m02:/home/docker/cp-test_ha-191523-m04_ha-191523-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-191523 ssh -n ha-191523-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-191523 ssh -n ha-191523-m02 "sudo cat /home/docker/cp-test_ha-191523-m04_ha-191523-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-191523 cp ha-191523-m04:/home/docker/cp-test.txt ha-191523-m03:/home/docker/cp-test_ha-191523-m04_ha-191523-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-191523 ssh -n ha-191523-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-191523 ssh -n ha-191523-m03 "sudo cat /home/docker/cp-test_ha-191523-m04_ha-191523-m03.txt"
--- PASS: TestMutliControlPlane/serial/CopyFile (20.41s)
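This block walks the full copy matrix for the four-node cluster: host to node, node to host, and every node-to-node pair, each verified with a sudo cat over ssh. One representative node-to-node pair from this run:

	out/minikube-linux-arm64 -p ha-191523 cp ha-191523-m02:/home/docker/cp-test.txt \
	  ha-191523-m04:/home/docker/cp-test_ha-191523-m02_ha-191523-m04.txt
	out/minikube-linux-arm64 -p ha-191523 ssh -n ha-191523-m04 \
	  "sudo cat /home/docker/cp-test_ha-191523-m02_ha-191523-m04.txt"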
TestMutliControlPlane/serial/StopSecondaryNode (12.89s)
=== RUN   TestMutliControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-arm64 -p ha-191523 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-arm64 -p ha-191523 node stop m02 -v=7 --alsologtostderr: (12.150119118s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-arm64 -p ha-191523 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-191523 status -v=7 --alsologtostderr: exit status 7 (739.984245ms)
-- stdout --
	ha-191523
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-191523-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-191523-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-191523-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I0307 22:00:26.225259   56207 out.go:291] Setting OutFile to fd 1 ...
	I0307 22:00:26.225427   56207 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 22:00:26.225439   56207 out.go:304] Setting ErrFile to fd 2...
	I0307 22:00:26.225445   56207 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 22:00:26.225677   56207 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18320-2408/.minikube/bin
	I0307 22:00:26.225871   56207 out.go:298] Setting JSON to false
	I0307 22:00:26.225957   56207 mustload.go:65] Loading cluster: ha-191523
	I0307 22:00:26.226019   56207 notify.go:220] Checking for updates...
	I0307 22:00:26.226401   56207 config.go:182] Loaded profile config "ha-191523": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0307 22:00:26.226493   56207 status.go:255] checking status of ha-191523 ...
	I0307 22:00:26.227081   56207 cli_runner.go:164] Run: docker container inspect ha-191523 --format={{.State.Status}}
	I0307 22:00:26.244891   56207 status.go:330] ha-191523 host status = "Running" (err=<nil>)
	I0307 22:00:26.244917   56207 host.go:66] Checking if "ha-191523" exists ...
	I0307 22:00:26.245232   56207 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-191523
	I0307 22:00:26.262835   56207 host.go:66] Checking if "ha-191523" exists ...
	I0307 22:00:26.263142   56207 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0307 22:00:26.263196   56207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-191523
	I0307 22:00:26.292642   56207 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32792 SSHKeyPath:/home/jenkins/minikube-integration/18320-2408/.minikube/machines/ha-191523/id_rsa Username:docker}
	I0307 22:00:26.385637   56207 ssh_runner.go:195] Run: systemctl --version
	I0307 22:00:26.390242   56207 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0307 22:00:26.401549   56207 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0307 22:00:26.470730   56207 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:76 SystemTime:2024-03-07 22:00:26.460775319 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0307 22:00:26.471369   56207 kubeconfig.go:125] found "ha-191523" server: "https://192.168.49.254:8443"
	I0307 22:00:26.471390   56207 api_server.go:166] Checking apiserver status ...
	I0307 22:00:26.471440   56207 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 22:00:26.487240   56207 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1438/cgroup
	I0307 22:00:26.499128   56207 api_server.go:182] apiserver freezer: "5:freezer:/docker/ecebf3eaf3b8f437e9aa6841f875c304de8428144c37b48656eb6e049d75904e/kubepods/burstable/pod2d699c9095573f4190b7d84e4c9a66ab/1718a254c52abf16424cecc871240e3de3414ef11ea3481cdac928377f923de4"
	I0307 22:00:26.499201   56207 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/ecebf3eaf3b8f437e9aa6841f875c304de8428144c37b48656eb6e049d75904e/kubepods/burstable/pod2d699c9095573f4190b7d84e4c9a66ab/1718a254c52abf16424cecc871240e3de3414ef11ea3481cdac928377f923de4/freezer.state
	I0307 22:00:26.508021   56207 api_server.go:204] freezer state: "THAWED"
	I0307 22:00:26.508049   56207 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0307 22:00:26.516838   56207 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0307 22:00:26.516864   56207 status.go:422] ha-191523 apiserver status = Running (err=<nil>)
	I0307 22:00:26.516876   56207 status.go:257] ha-191523 status: &{Name:ha-191523 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0307 22:00:26.516904   56207 status.go:255] checking status of ha-191523-m02 ...
	I0307 22:00:26.517226   56207 cli_runner.go:164] Run: docker container inspect ha-191523-m02 --format={{.State.Status}}
	I0307 22:00:26.534668   56207 status.go:330] ha-191523-m02 host status = "Stopped" (err=<nil>)
	I0307 22:00:26.534701   56207 status.go:343] host is not running, skipping remaining checks
	I0307 22:00:26.534709   56207 status.go:257] ha-191523-m02 status: &{Name:ha-191523-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0307 22:00:26.534729   56207 status.go:255] checking status of ha-191523-m03 ...
	I0307 22:00:26.535088   56207 cli_runner.go:164] Run: docker container inspect ha-191523-m03 --format={{.State.Status}}
	I0307 22:00:26.553564   56207 status.go:330] ha-191523-m03 host status = "Running" (err=<nil>)
	I0307 22:00:26.553591   56207 host.go:66] Checking if "ha-191523-m03" exists ...
	I0307 22:00:26.553903   56207 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-191523-m03
	I0307 22:00:26.570740   56207 host.go:66] Checking if "ha-191523-m03" exists ...
	I0307 22:00:26.571169   56207 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0307 22:00:26.571245   56207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-191523-m03
	I0307 22:00:26.588368   56207 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32802 SSHKeyPath:/home/jenkins/minikube-integration/18320-2408/.minikube/machines/ha-191523-m03/id_rsa Username:docker}
	I0307 22:00:26.677199   56207 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0307 22:00:26.691310   56207 kubeconfig.go:125] found "ha-191523" server: "https://192.168.49.254:8443"
	I0307 22:00:26.691339   56207 api_server.go:166] Checking apiserver status ...
	I0307 22:00:26.691377   56207 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 22:00:26.706155   56207 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1374/cgroup
	I0307 22:00:26.716856   56207 api_server.go:182] apiserver freezer: "5:freezer:/docker/3284b4a5f641d4eaeb33b87c78d37ff481ce568f2f98c85a4f0d23139545dd95/kubepods/burstable/podb3196c2f6b3cce03bb84a3caf23a0c4b/8bcd6f0ca49c2dbcf4bd1056cb0a40cf8869a3454963880a6e9cea42702162c8"
	I0307 22:00:26.716950   56207 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/3284b4a5f641d4eaeb33b87c78d37ff481ce568f2f98c85a4f0d23139545dd95/kubepods/burstable/podb3196c2f6b3cce03bb84a3caf23a0c4b/8bcd6f0ca49c2dbcf4bd1056cb0a40cf8869a3454963880a6e9cea42702162c8/freezer.state
	I0307 22:00:26.726751   56207 api_server.go:204] freezer state: "THAWED"
	I0307 22:00:26.726779   56207 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0307 22:00:26.735321   56207 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0307 22:00:26.735347   56207 status.go:422] ha-191523-m03 apiserver status = Running (err=<nil>)
	I0307 22:00:26.735357   56207 status.go:257] ha-191523-m03 status: &{Name:ha-191523-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0307 22:00:26.735373   56207 status.go:255] checking status of ha-191523-m04 ...
	I0307 22:00:26.735701   56207 cli_runner.go:164] Run: docker container inspect ha-191523-m04 --format={{.State.Status}}
	I0307 22:00:26.753548   56207 status.go:330] ha-191523-m04 host status = "Running" (err=<nil>)
	I0307 22:00:26.753574   56207 host.go:66] Checking if "ha-191523-m04" exists ...
	I0307 22:00:26.753885   56207 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-191523-m04
	I0307 22:00:26.771480   56207 host.go:66] Checking if "ha-191523-m04" exists ...
	I0307 22:00:26.771780   56207 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0307 22:00:26.771840   56207 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-191523-m04
	I0307 22:00:26.787818   56207 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32807 SSHKeyPath:/home/jenkins/minikube-integration/18320-2408/.minikube/machines/ha-191523-m04/id_rsa Username:docker}
	I0307 22:00:26.884579   56207 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0307 22:00:26.896646   56207 status.go:257] ha-191523-m04 status: &{Name:ha-191523-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMutliControlPlane/serial/StopSecondaryNode (12.89s)
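The status stderr above shows how apiserver health is decided per control-plane node: locate the kube-apiserver process, read its freezer cgroup to confirm the state is THAWED (not paused), then probe /healthz through the HA virtual IP. A condensed sketch of the same probe run inside a node; the cgroup path is resolved per process, so the placeholder below must be replaced with the path the egrep line reports:

	pid=$(sudo pgrep -xnf 'kube-apiserver.*minikube.*')
	sudo egrep '^[0-9]+:freezer:' /proc/$pid/cgroup
	sudo cat /sys/fs/cgroup/freezer/<path-from-previous-line>/freezer.state   # expect: THAWED
	curl -ks https://192.168.49.254:8443/healthz                              # expect: ok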
TestMutliControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.56s)
=== RUN   TestMutliControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMutliControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.56s)
TestMutliControlPlane/serial/RestartSecondaryNode (17.95s)
=== RUN   TestMutliControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-arm64 -p ha-191523 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Done: out/minikube-linux-arm64 -p ha-191523 node start m02 -v=7 --alsologtostderr: (16.602173884s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p ha-191523 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-linux-arm64 -p ha-191523 status -v=7 --alsologtostderr: (1.232974826s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMutliControlPlane/serial/RestartSecondaryNode (17.95s)
TestMutliControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.79s)
=== RUN   TestMutliControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
E0307 22:00:45.512607    7764 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/functional-894723/client.crt: no such file or directory
E0307 22:00:45.518032    7764 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/functional-894723/client.crt: no such file or directory
E0307 22:00:45.529305    7764 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/functional-894723/client.crt: no such file or directory
E0307 22:00:45.549515    7764 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/functional-894723/client.crt: no such file or directory
E0307 22:00:45.590062    7764 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/functional-894723/client.crt: no such file or directory
E0307 22:00:45.671560    7764 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/functional-894723/client.crt: no such file or directory
E0307 22:00:45.831857    7764 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/functional-894723/client.crt: no such file or directory
E0307 22:00:46.152395    7764 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/functional-894723/client.crt: no such file or directory
--- PASS: TestMutliControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.79s)
TestMutliControlPlane/serial/RestartClusterKeepsNodes (143.57s)
=== RUN   TestMutliControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-191523 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-191523 -v=7 --alsologtostderr
E0307 22:00:46.793110    7764 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/functional-894723/client.crt: no such file or directory
E0307 22:00:48.073491    7764 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/functional-894723/client.crt: no such file or directory
E0307 22:00:50.633711    7764 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/functional-894723/client.crt: no such file or directory
E0307 22:00:55.754782    7764 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/functional-894723/client.crt: no such file or directory
E0307 22:01:05.994954    7764 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/functional-894723/client.crt: no such file or directory
ha_test.go:462: (dbg) Done: out/minikube-linux-arm64 stop -p ha-191523 -v=7 --alsologtostderr: (37.332657919s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-arm64 start -p ha-191523 --wait=true -v=7 --alsologtostderr
E0307 22:01:26.475320    7764 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/functional-894723/client.crt: no such file or directory
E0307 22:02:07.436388    7764 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/functional-894723/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-arm64 start -p ha-191523 --wait=true -v=7 --alsologtostderr: (1m46.06307558s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-191523
--- PASS: TestMutliControlPlane/serial/RestartClusterKeepsNodes (143.57s)
TestMutliControlPlane/serial/DeleteSecondaryNode (11.34s)

                                                
                                                
=== RUN   TestMutliControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-arm64 -p ha-191523 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-arm64 -p ha-191523 node delete m03 -v=7 --alsologtostderr: (10.318795767s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-arm64 -p ha-191523 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMutliControlPlane/serial/DeleteSecondaryNode (11.34s)
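
The readiness check above feeds a go-template to kubectl. kubectl evaluates such templates against the JSON form of the object, so the same template can be reproduced with text/template over a decoded interface{}; a minimal sketch, with a hypothetical two-node list:

	package main

	import (
		"encoding/json"
		"os"
		"text/template"
	)

	func main() {
		// Same template string the test passes via -o go-template=...
		const tmpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
		// Hypothetical node list in the shape kubectl would decode.
		const nodes = `{"items":[{"status":{"conditions":[{"type":"Ready","status":"True"}]}},
		               {"status":{"conditions":[{"type":"Ready","status":"True"}]}}]}`

		var obj interface{}
		if err := json.Unmarshal([]byte(nodes), &obj); err != nil {
			panic(err)
		}
		t := template.Must(template.New("ready").Parse(tmpl))
		// Prints " True" once per node; the test asserts nothing but True appears.
		if err := t.Execute(os.Stdout, obj); err != nil {
			panic(err)
		}
	}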

TestMutliControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.53s)

=== RUN   TestMutliControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMutliControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.53s)

TestMutliControlPlane/serial/StopCluster (35.95s)

=== RUN   TestMutliControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-arm64 -p ha-191523 stop -v=7 --alsologtostderr
E0307 22:03:29.357240    7764 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/functional-894723/client.crt: no such file or directory
ha_test.go:531: (dbg) Done: out/minikube-linux-arm64 -p ha-191523 stop -v=7 --alsologtostderr: (35.842971336s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-arm64 -p ha-191523 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-191523 status -v=7 --alsologtostderr: exit status 7 (106.596731ms)
-- stdout --
	ha-191523
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-191523-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-191523-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0307 22:03:57.540498   69714 out.go:291] Setting OutFile to fd 1 ...
	I0307 22:03:57.540634   69714 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 22:03:57.540643   69714 out.go:304] Setting ErrFile to fd 2...
	I0307 22:03:57.540649   69714 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 22:03:57.540904   69714 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18320-2408/.minikube/bin
	I0307 22:03:57.541082   69714 out.go:298] Setting JSON to false
	I0307 22:03:57.541116   69714 mustload.go:65] Loading cluster: ha-191523
	I0307 22:03:57.541229   69714 notify.go:220] Checking for updates...
	I0307 22:03:57.541532   69714 config.go:182] Loaded profile config "ha-191523": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0307 22:03:57.541542   69714 status.go:255] checking status of ha-191523 ...
	I0307 22:03:57.542076   69714 cli_runner.go:164] Run: docker container inspect ha-191523 --format={{.State.Status}}
	I0307 22:03:57.559407   69714 status.go:330] ha-191523 host status = "Stopped" (err=<nil>)
	I0307 22:03:57.559426   69714 status.go:343] host is not running, skipping remaining checks
	I0307 22:03:57.559433   69714 status.go:257] ha-191523 status: &{Name:ha-191523 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0307 22:03:57.559460   69714 status.go:255] checking status of ha-191523-m02 ...
	I0307 22:03:57.559779   69714 cli_runner.go:164] Run: docker container inspect ha-191523-m02 --format={{.State.Status}}
	I0307 22:03:57.574930   69714 status.go:330] ha-191523-m02 host status = "Stopped" (err=<nil>)
	I0307 22:03:57.574949   69714 status.go:343] host is not running, skipping remaining checks
	I0307 22:03:57.574969   69714 status.go:257] ha-191523-m02 status: &{Name:ha-191523-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0307 22:03:57.574990   69714 status.go:255] checking status of ha-191523-m04 ...
	I0307 22:03:57.575287   69714 cli_runner.go:164] Run: docker container inspect ha-191523-m04 --format={{.State.Status}}
	I0307 22:03:57.593359   69714 status.go:330] ha-191523-m04 host status = "Stopped" (err=<nil>)
	I0307 22:03:57.593379   69714 status.go:343] host is not running, skipping remaining checks
	I0307 22:03:57.593386   69714 status.go:257] ha-191523-m04 status: &{Name:ha-191523-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMutliControlPlane/serial/StopCluster (35.95s)
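
The status dumps above (status.go:257) are %+v prints of a Go struct. A minimal sketch that reproduces the line format using the field names visible in the log; minikube's actual type may differ:

	package main

	import "fmt"

	// clusterStatus mirrors the fields seen in the dumps above; it is ours,
	// not minikube's real type.
	type clusterStatus struct {
		Name       string
		Host       string
		Kubelet    string
		APIServer  string
		Kubeconfig string
		Worker     bool
		TimeToStop string
		DockerEnv  string
		PodManEnv  string
	}

	func main() {
		s := &clusterStatus{Name: "ha-191523", Host: "Stopped", Kubelet: "Stopped",
			APIServer: "Stopped", Kubeconfig: "Stopped"}
		fmt.Printf("%+v\n", s) // &{Name:ha-191523 Host:Stopped Kubelet:Stopped ...}
	}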

TestMutliControlPlane/serial/RestartCluster (69.77s)

=== RUN   TestMutliControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-arm64 start -p ha-191523 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E0307 22:04:28.004057    7764 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/addons-963512/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-arm64 start -p ha-191523 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m8.753846538s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-arm64 -p ha-191523 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMutliControlPlane/serial/RestartCluster (69.77s)

TestMutliControlPlane/serial/DegradedAfterClusterRestart (0.55s)

=== RUN   TestMutliControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMutliControlPlane/serial/DegradedAfterClusterRestart (0.55s)

TestMutliControlPlane/serial/AddSecondaryNode (45.45s)

=== RUN   TestMutliControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-191523 --control-plane -v=7 --alsologtostderr
E0307 22:05:45.513276    7764 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/functional-894723/client.crt: no such file or directory
ha_test.go:605: (dbg) Done: out/minikube-linux-arm64 node add -p ha-191523 --control-plane -v=7 --alsologtostderr: (44.43901173s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-arm64 -p ha-191523 status -v=7 --alsologtostderr
ha_test.go:611: (dbg) Done: out/minikube-linux-arm64 -p ha-191523 status -v=7 --alsologtostderr: (1.006367323s)
--- PASS: TestMutliControlPlane/serial/AddSecondaryNode (45.45s)

TestMutliControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.75s)

=== RUN   TestMutliControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMutliControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.75s)

TestJSONOutput/start/Command (62.51s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-009697 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd
E0307 22:06:13.198214    7764 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/functional-894723/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-009697 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd: (1m2.500430703s)
--- PASS: TestJSONOutput/start/Command (62.51s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.75s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-009697 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.75s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.64s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-009697 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.64s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.81s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-009697 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-009697 --output=json --user=testUser: (5.807292352s)
--- PASS: TestJSONOutput/stop/Command (5.81s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.23s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-443669 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-443669 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (88.73494ms)
-- stdout --
	{"specversion":"1.0","id":"2fa11c34-ddb4-4c7e-bb46-e1fd1f3e9589","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-443669] minikube v1.32.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"8bb9d188-0d09-49e2-a1c0-c0b05bb13b30","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18320"}}
	{"specversion":"1.0","id":"e4495288-2b3c-41a0-af86-30bda2bca7eb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"c6b29de8-8454-46f0-b867-b5cbe7615e24","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18320-2408/kubeconfig"}}
	{"specversion":"1.0","id":"b10debcf-ee38-4aaf-8a4f-d5ab03eac960","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18320-2408/.minikube"}}
	{"specversion":"1.0","id":"f9a236fa-3028-4063-9b52-8e23d7e135d9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"8ea9b715-879a-43da-8e80-11e652da17b9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"c96207c3-c92d-4b0d-99f2-99b8ea99ecb0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-443669" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-443669
--- PASS: TestErrorJSONOutput (0.23s)
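
Each --output=json line above is a CloudEvents-style JSON object. A short sketch that decodes the error event (abbreviated from the log); the struct is ours, with field names read off the JSON:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// minikubeEvent holds just the fields this sketch inspects.
	type minikubeEvent struct {
		SpecVersion string            `json:"specversion"`
		Type        string            `json:"type"`
		Data        map[string]string `json:"data"`
	}

	func main() {
		// Abbreviated from the io.k8s.sigs.minikube.error line in the log above.
		line := `{"specversion":"1.0","type":"io.k8s.sigs.minikube.error","data":{"exitcode":"56","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS"}}`
		var ev minikubeEvent
		if err := json.Unmarshal([]byte(line), &ev); err != nil {
			panic(err)
		}
		// The ...error event carries the exit code the test expects (56).
		fmt.Println(ev.Type, ev.Data["name"], ev.Data["exitcode"])
	}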

TestKicCustomNetwork/create_custom_network (40.45s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-516871 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-516871 --network=: (38.335913944s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-516871" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-516871
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-516871: (2.087449739s)
--- PASS: TestKicCustomNetwork/create_custom_network (40.45s)

TestKicCustomNetwork/use_default_bridge_network (33.9s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-776734 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-776734 --network=bridge: (31.849709486s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-776734" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-776734
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-776734: (2.012732816s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (33.90s)

TestKicExistingNetwork (34.84s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-382564 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-382564 --network=existing-network: (32.757180578s)
helpers_test.go:175: Cleaning up "existing-network-382564" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-382564
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-382564: (1.947307757s)
--- PASS: TestKicExistingNetwork (34.84s)

TestKicCustomSubnet (35.2s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-722281 --subnet=192.168.60.0/24
E0307 22:09:28.012500    7764 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/addons-963512/client.crt: no such file or directory
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-722281 --subnet=192.168.60.0/24: (33.087610931s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-722281 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-722281" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-722281
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-722281: (2.08973733s)
--- PASS: TestKicCustomSubnet (35.20s)
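
A minimal sketch of the subnet verification at kic_custom_network_test.go:161: ask docker for the network's first IPAM subnet using the same format template, and compare it against the --subnet value passed at start.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same docker invocation as the test, for the profile created above.
		out, err := exec.Command("docker", "network", "inspect", "custom-subnet-722281",
			"--format", "{{(index .IPAM.Config 0).Subnet}}").Output()
		if err != nil {
			panic(err)
		}
		got := strings.TrimSpace(string(out))
		fmt.Println(got == "192.168.60.0/24") // the subnet requested at start time
	}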

TestKicStaticIP (33.7s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-951917 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-951917 --static-ip=192.168.200.200: (31.501252852s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-951917 ip
helpers_test.go:175: Cleaning up "static-ip-951917" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-951917
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-951917: (2.052742458s)
--- PASS: TestKicStaticIP (33.70s)

TestMainNoArgs (0.07s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.07s)

TestMinikubeProfile (69.27s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-647267 --driver=docker  --container-runtime=containerd
E0307 22:10:45.512717    7764 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/functional-894723/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-647267 --driver=docker  --container-runtime=containerd: (32.078975418s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-661544 --driver=docker  --container-runtime=containerd
E0307 22:10:51.049696    7764 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/addons-963512/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-661544 --driver=docker  --container-runtime=containerd: (31.708642681s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-647267
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-661544
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-661544" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-661544
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-661544: (1.964579066s)
helpers_test.go:175: Cleaning up "first-647267" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-647267
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-647267: (2.284107317s)
--- PASS: TestMinikubeProfile (69.27s)

TestMountStart/serial/StartWithMountFirst (6.12s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-124068 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-124068 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.115649305s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.12s)

TestMountStart/serial/VerifyMountFirst (0.26s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-124068 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)

TestMountStart/serial/StartWithMountSecond (6.06s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-137852 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-137852 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.054824286s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.06s)

TestMountStart/serial/VerifyMountSecond (0.27s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-137852 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

TestMountStart/serial/DeleteFirst (1.64s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-124068 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-124068 --alsologtostderr -v=5: (1.638534682s)
--- PASS: TestMountStart/serial/DeleteFirst (1.64s)

TestMountStart/serial/VerifyMountPostDelete (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-137852 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

TestMountStart/serial/Stop (1.23s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-137852
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-137852: (1.225263665s)
--- PASS: TestMountStart/serial/Stop (1.23s)

TestMountStart/serial/RestartStopped (7.4s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-137852
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-137852: (6.397601093s)
--- PASS: TestMountStart/serial/RestartStopped (7.40s)

TestMountStart/serial/VerifyMountPostStop (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-137852 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

TestMultiNode/serial/FreshStart2Nodes (79.32s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-953982 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-953982 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m18.827565485s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-953982 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (79.32s)

TestMultiNode/serial/DeployApp2Nodes (6.01s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-953982 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-953982 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-953982 -- rollout status deployment/busybox: (4.002288627s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-953982 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-953982 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-953982 -- exec busybox-5b5d89c9d6-mpcdb -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-953982 -- exec busybox-5b5d89c9d6-q7lxg -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-953982 -- exec busybox-5b5d89c9d6-mpcdb -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-953982 -- exec busybox-5b5d89c9d6-q7lxg -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-953982 -- exec busybox-5b5d89c9d6-mpcdb -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-953982 -- exec busybox-5b5d89c9d6-q7lxg -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.01s)

TestMultiNode/serial/PingHostFrom2Pods (1.04s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-953982 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-953982 -- exec busybox-5b5d89c9d6-mpcdb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-953982 -- exec busybox-5b5d89c9d6-mpcdb -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-953982 -- exec busybox-5b5d89c9d6-q7lxg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-953982 -- exec busybox-5b5d89c9d6-q7lxg -- sh -c "ping -c 1 192.168.58.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.04s)
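
The shell pipeline above (nslookup | awk 'NR==5' | cut -d' ' -f3) isolates the resolved address of host.minikube.internal, which the test then targets with ping -c 1. A minimal sketch of the same parsing; the sample output is hypothetical busybox-style nslookup output:

	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		// Hypothetical busybox nslookup output for host.minikube.internal.
		out := "Server:    10.96.0.10\n" +
			"Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local\n" +
			"\n" +
			"Name:      host.minikube.internal\n" +
			"Address 1: 192.168.58.1 host.minikube.internal\n"
		line := strings.Split(out, "\n")[4]      // awk 'NR==5' selects the fifth line
		fmt.Println(strings.Split(line, " ")[2]) // cut -d' ' -f3 -> "192.168.58.1"
	}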

TestMultiNode/serial/AddNode (17.96s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-953982 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-953982 -v 3 --alsologtostderr: (17.299384275s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-953982 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (17.96s)

TestMultiNode/serial/MultiNodeLabels (0.09s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-953982 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

TestMultiNode/serial/ProfileList (0.37s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.37s)

TestMultiNode/serial/CopyFile (10.25s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-953982 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-953982 cp testdata/cp-test.txt multinode-953982:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-953982 ssh -n multinode-953982 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-953982 cp multinode-953982:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3419771202/001/cp-test_multinode-953982.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-953982 ssh -n multinode-953982 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-953982 cp multinode-953982:/home/docker/cp-test.txt multinode-953982-m02:/home/docker/cp-test_multinode-953982_multinode-953982-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-953982 ssh -n multinode-953982 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-953982 ssh -n multinode-953982-m02 "sudo cat /home/docker/cp-test_multinode-953982_multinode-953982-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-953982 cp multinode-953982:/home/docker/cp-test.txt multinode-953982-m03:/home/docker/cp-test_multinode-953982_multinode-953982-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-953982 ssh -n multinode-953982 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-953982 ssh -n multinode-953982-m03 "sudo cat /home/docker/cp-test_multinode-953982_multinode-953982-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-953982 cp testdata/cp-test.txt multinode-953982-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-953982 ssh -n multinode-953982-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-953982 cp multinode-953982-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3419771202/001/cp-test_multinode-953982-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-953982 ssh -n multinode-953982-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-953982 cp multinode-953982-m02:/home/docker/cp-test.txt multinode-953982:/home/docker/cp-test_multinode-953982-m02_multinode-953982.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-953982 ssh -n multinode-953982-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-953982 ssh -n multinode-953982 "sudo cat /home/docker/cp-test_multinode-953982-m02_multinode-953982.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-953982 cp multinode-953982-m02:/home/docker/cp-test.txt multinode-953982-m03:/home/docker/cp-test_multinode-953982-m02_multinode-953982-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-953982 ssh -n multinode-953982-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-953982 ssh -n multinode-953982-m03 "sudo cat /home/docker/cp-test_multinode-953982-m02_multinode-953982-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-953982 cp testdata/cp-test.txt multinode-953982-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-953982 ssh -n multinode-953982-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-953982 cp multinode-953982-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3419771202/001/cp-test_multinode-953982-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-953982 ssh -n multinode-953982-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-953982 cp multinode-953982-m03:/home/docker/cp-test.txt multinode-953982:/home/docker/cp-test_multinode-953982-m03_multinode-953982.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-953982 ssh -n multinode-953982-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-953982 ssh -n multinode-953982 "sudo cat /home/docker/cp-test_multinode-953982-m03_multinode-953982.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-953982 cp multinode-953982-m03:/home/docker/cp-test.txt multinode-953982-m02:/home/docker/cp-test_multinode-953982-m03_multinode-953982-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-953982 ssh -n multinode-953982-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-953982 ssh -n multinode-953982-m02 "sudo cat /home/docker/cp-test_multinode-953982-m03_multinode-953982-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.25s)
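
Each round trip above follows the same pattern: push a file with minikube cp, read it back with minikube ssh, and compare. A minimal sketch of one leg, assuming only the commands shown in the log:

	package main

	import (
		"bytes"
		"fmt"
		"os"
		"os/exec"
	)

	// run invokes the minikube binary the same way helpers_test.go does.
	func run(args ...string) []byte {
		out, err := exec.Command("out/minikube-linux-arm64", args...).Output()
		if err != nil {
			panic(err)
		}
		return out
	}

	func main() {
		want, err := os.ReadFile("testdata/cp-test.txt")
		if err != nil {
			panic(err)
		}
		run("-p", "multinode-953982", "cp", "testdata/cp-test.txt",
			"multinode-953982:/home/docker/cp-test.txt")
		got := run("-p", "multinode-953982", "ssh", "-n", "multinode-953982",
			"sudo cat /home/docker/cp-test.txt")
		// true when the copy survived the round trip intact
		fmt.Println(bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)))
	}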

TestMultiNode/serial/StopNode (2.26s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-953982 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-953982 node stop m03: (1.227071728s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-953982 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-953982 status: exit status 7 (511.379198ms)
-- stdout --
	multinode-953982
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-953982-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-953982-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-953982 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-953982 status --alsologtostderr: exit status 7 (518.526884ms)
-- stdout --
	multinode-953982
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-953982-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-953982-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0307 22:13:47.215687  121313 out.go:291] Setting OutFile to fd 1 ...
	I0307 22:13:47.215890  121313 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 22:13:47.215915  121313 out.go:304] Setting ErrFile to fd 2...
	I0307 22:13:47.215934  121313 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 22:13:47.216205  121313 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18320-2408/.minikube/bin
	I0307 22:13:47.216436  121313 out.go:298] Setting JSON to false
	I0307 22:13:47.216507  121313 mustload.go:65] Loading cluster: multinode-953982
	I0307 22:13:47.216545  121313 notify.go:220] Checking for updates...
	I0307 22:13:47.217035  121313 config.go:182] Loaded profile config "multinode-953982": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0307 22:13:47.217072  121313 status.go:255] checking status of multinode-953982 ...
	I0307 22:13:47.217617  121313 cli_runner.go:164] Run: docker container inspect multinode-953982 --format={{.State.Status}}
	I0307 22:13:47.239342  121313 status.go:330] multinode-953982 host status = "Running" (err=<nil>)
	I0307 22:13:47.239364  121313 host.go:66] Checking if "multinode-953982" exists ...
	I0307 22:13:47.239690  121313 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-953982
	I0307 22:13:47.255438  121313 host.go:66] Checking if "multinode-953982" exists ...
	I0307 22:13:47.255748  121313 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0307 22:13:47.255792  121313 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-953982
	I0307 22:13:47.284876  121313 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32912 SSHKeyPath:/home/jenkins/minikube-integration/18320-2408/.minikube/machines/multinode-953982/id_rsa Username:docker}
	I0307 22:13:47.381344  121313 ssh_runner.go:195] Run: systemctl --version
	I0307 22:13:47.385944  121313 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0307 22:13:47.397416  121313 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0307 22:13:47.465871  121313 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:66 SystemTime:2024-03-07 22:13:47.4558145 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0307 22:13:47.466613  121313 kubeconfig.go:125] found "multinode-953982" server: "https://192.168.58.2:8443"
	I0307 22:13:47.466637  121313 api_server.go:166] Checking apiserver status ...
	I0307 22:13:47.466686  121313 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0307 22:13:47.478186  121313 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1375/cgroup
	I0307 22:13:47.487466  121313 api_server.go:182] apiserver freezer: "5:freezer:/docker/f9a89ea3f581f60291e15ddc7054a2e462590c186d6a7882e1d1cde882e8c546/kubepods/burstable/pod9b09db87d41a6a53a4db6c9705f1eedc/99aa0e0ab68e0a948d731e131f4d82ccdb2677298b68719d0ca1088ef100cf24"
	I0307 22:13:47.487543  121313 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/f9a89ea3f581f60291e15ddc7054a2e462590c186d6a7882e1d1cde882e8c546/kubepods/burstable/pod9b09db87d41a6a53a4db6c9705f1eedc/99aa0e0ab68e0a948d731e131f4d82ccdb2677298b68719d0ca1088ef100cf24/freezer.state
	I0307 22:13:47.495977  121313 api_server.go:204] freezer state: "THAWED"
	I0307 22:13:47.496011  121313 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0307 22:13:47.504397  121313 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0307 22:13:47.504422  121313 status.go:422] multinode-953982 apiserver status = Running (err=<nil>)
	I0307 22:13:47.504433  121313 status.go:257] multinode-953982 status: &{Name:multinode-953982 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0307 22:13:47.504450  121313 status.go:255] checking status of multinode-953982-m02 ...
	I0307 22:13:47.504755  121313 cli_runner.go:164] Run: docker container inspect multinode-953982-m02 --format={{.State.Status}}
	I0307 22:13:47.520661  121313 status.go:330] multinode-953982-m02 host status = "Running" (err=<nil>)
	I0307 22:13:47.520689  121313 host.go:66] Checking if "multinode-953982-m02" exists ...
	I0307 22:13:47.521003  121313 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-953982-m02
	I0307 22:13:47.536696  121313 host.go:66] Checking if "multinode-953982-m02" exists ...
	I0307 22:13:47.537018  121313 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0307 22:13:47.537075  121313 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-953982-m02
	I0307 22:13:47.559160  121313 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32917 SSHKeyPath:/home/jenkins/minikube-integration/18320-2408/.minikube/machines/multinode-953982-m02/id_rsa Username:docker}
	I0307 22:13:47.649641  121313 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0307 22:13:47.660935  121313 status.go:257] multinode-953982-m02 status: &{Name:multinode-953982-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0307 22:13:47.661016  121313 status.go:255] checking status of multinode-953982-m03 ...
	I0307 22:13:47.661349  121313 cli_runner.go:164] Run: docker container inspect multinode-953982-m03 --format={{.State.Status}}
	I0307 22:13:47.677756  121313 status.go:330] multinode-953982-m03 host status = "Stopped" (err=<nil>)
	I0307 22:13:47.677781  121313 status.go:343] host is not running, skipping remaining checks
	I0307 22:13:47.677788  121313 status.go:257] multinode-953982-m03 status: &{Name:multinode-953982-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.26s)
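
The apiserver probe visible in the stderr above works by cgroup inspection: status.go finds the kube-apiserver pid, greps its freezer line out of /proc/<pid>/cgroup, then reads freezer.state and treats "THAWED" as running. A minimal sketch of the path construction; the sample line is abbreviated from the log, and the real check runs over ssh inside the node:

	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		// One line of /proc/<pid>/cgroup, as matched by `egrep ^[0-9]+:freezer:`
		// in the log above (hashes abbreviated here).
		line := "5:freezer:/docker/f9a89ea3f581.../kubepods/burstable/pod9b09.../99aa0e0ab68e..."
		path := strings.SplitN(line, ":", 3)[2] // the cgroup path after "<n>:freezer:"
		state := "/sys/fs/cgroup/freezer" + path + "/freezer.state"
		fmt.Println(state) // reading this file yields "THAWED" for a live apiserver
	}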

TestMultiNode/serial/StartAfterStop (10.03s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-953982 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-953982 node start m03 -v=7 --alsologtostderr: (9.247403239s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-953982 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (10.03s)

TestMultiNode/serial/RestartKeepsNodes (87.25s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-953982
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-953982
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-953982: (25.126250088s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-953982 --wait=true -v=8 --alsologtostderr
E0307 22:14:28.004068    7764 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/addons-963512/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-953982 --wait=true -v=8 --alsologtostderr: (1m1.967530654s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-953982
--- PASS: TestMultiNode/serial/RestartKeepsNodes (87.25s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.72s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-953982 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-953982 node delete m03: (5.021622794s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-953982 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.72s)
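
The `kubectl get nodes -o go-template` call above uses Go's text/template syntax (which kubectl's go-template output is built on) to print the status of each node's "Ready" condition. A runnable sketch of the same template evaluated over a hand-built stand-in for the NodeList JSON (the data below is illustrative, not captured from the cluster):

	package main

	import (
		"os"
		"text/template"
	)

	// The exact template string from the test, minus the outer shell quoting.
	const tmpl = `{{range .items}}{{range .status.conditions}}` +
		`{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

	func main() {
		// Illustrative stand-in for the JSON kubectl feeds the template.
		nodes := map[string]any{
			"items": []any{
				map[string]any{"status": map[string]any{"conditions": []any{
					map[string]any{"type": "MemoryPressure", "status": "False"},
					map[string]any{"type": "Ready", "status": "True"},
				}}},
			},
		}
		t := template.Must(template.New("ready").Parse(tmpl))
		if err := t.Execute(os.Stdout, nodes); err != nil { // prints " True"
			panic(err)
		}
	}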

                                                
                                    
TestMultiNode/serial/StopMultiNode (23.93s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-953982 stop
E0307 22:15:45.513048    7764 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/functional-894723/client.crt: no such file or directory
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-953982 stop: (23.752056663s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-953982 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-953982 status: exit status 7 (88.12316ms)

                                                
                                                
-- stdout --
	multinode-953982
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-953982-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-953982 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-953982 status --alsologtostderr: exit status 7 (92.77788ms)

                                                
                                                
-- stdout --
	multinode-953982
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-953982-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0307 22:15:54.580806  129087 out.go:291] Setting OutFile to fd 1 ...
	I0307 22:15:54.580924  129087 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 22:15:54.580935  129087 out.go:304] Setting ErrFile to fd 2...
	I0307 22:15:54.580940  129087 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 22:15:54.581178  129087 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18320-2408/.minikube/bin
	I0307 22:15:54.581347  129087 out.go:298] Setting JSON to false
	I0307 22:15:54.581377  129087 mustload.go:65] Loading cluster: multinode-953982
	I0307 22:15:54.581489  129087 notify.go:220] Checking for updates...
	I0307 22:15:54.581771  129087 config.go:182] Loaded profile config "multinode-953982": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0307 22:15:54.581781  129087 status.go:255] checking status of multinode-953982 ...
	I0307 22:15:54.582260  129087 cli_runner.go:164] Run: docker container inspect multinode-953982 --format={{.State.Status}}
	I0307 22:15:54.601965  129087 status.go:330] multinode-953982 host status = "Stopped" (err=<nil>)
	I0307 22:15:54.601989  129087 status.go:343] host is not running, skipping remaining checks
	I0307 22:15:54.601996  129087 status.go:257] multinode-953982 status: &{Name:multinode-953982 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0307 22:15:54.602035  129087 status.go:255] checking status of multinode-953982-m02 ...
	I0307 22:15:54.602357  129087 cli_runner.go:164] Run: docker container inspect multinode-953982-m02 --format={{.State.Status}}
	I0307 22:15:54.617517  129087 status.go:330] multinode-953982-m02 host status = "Stopped" (err=<nil>)
	I0307 22:15:54.617541  129087 status.go:343] host is not running, skipping remaining checks
	I0307 22:15:54.617548  129087 status.go:257] multinode-953982-m02 status: &{Name:multinode-953982-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.93s)
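
`minikube status` deliberately exits non-zero when any node is stopped, which is why the test treats exit status 7 here as expected rather than as a failure. A minimal sketch of a wrapper that makes that distinction (binary path and profile name are the ones from this report; the 7-means-stopped reading comes from the "status error: exit status 7 (may be ok)" lines later in the log):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("out/minikube-linux-arm64",
			"-p", "multinode-953982", "status").Output()
		fmt.Print(string(out))
		if exitErr, ok := err.(*exec.ExitError); ok && exitErr.ExitCode() == 7 {
			// Stopped host: expected after `minikube stop`, not a test failure.
			fmt.Println("status: stopped (exit 7, may be ok)")
			return
		}
		if err != nil {
			fmt.Println("status failed:", err)
		}
	}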

                                                
                                    
TestMultiNode/serial/RestartMultiNode (55.29s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-953982 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-953982 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (54.620155004s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-953982 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (55.29s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (34s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-953982
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-953982-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-953982-m02 --driver=docker  --container-runtime=containerd: exit status 14 (99.527633ms)

                                                
                                                
-- stdout --
	* [multinode-953982-m02] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18320
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18320-2408/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18320-2408/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-953982-m02' is duplicated with machine name 'multinode-953982-m02' in profile 'multinode-953982'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-953982-m03 --driver=docker  --container-runtime=containerd
E0307 22:17:08.559268    7764 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/functional-894723/client.crt: no such file or directory
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-953982-m03 --driver=docker  --container-runtime=containerd: (31.554868712s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-953982
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-953982: exit status 80 (319.517269ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-953982 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-953982-m03 already exists in multinode-953982-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-953982-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-953982-m03: (1.960054079s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (34.00s)
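
Both non-zero exits above come from name validation: the requested profile name collides with a machine name that the existing multi-node profile already owns. A small sketch of that rule (hypothetical helper, not minikube's real validation code; the "-mNN" naming convention is visible throughout this log):

	package main

	import "fmt"

	// machineNames lists the machine names a profile owns: the profile name
	// itself for the control plane, then "<profile>-m02", "<profile>-m03", ...
	func machineNames(profile string, nodes int) []string {
		names := []string{profile}
		for i := 2; i <= nodes; i++ {
			names = append(names, fmt.Sprintf("%s-m%02d", profile, i))
		}
		return names
	}

	func main() {
		requested := "multinode-953982-m02"
		for _, m := range machineNames("multinode-953982", 2) {
			if m == requested {
				// Matches the MK_USAGE exit above: "Profile name should be unique".
				fmt.Printf("profile %q duplicates machine name %q\n", requested, m)
			}
		}
	}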

                                                
                                    
TestPreload (125.09s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-618871 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-618871 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m24.028600044s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-618871 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-618871 image pull gcr.io/k8s-minikube/busybox: (1.250505248s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-618871
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-618871: (12.037848708s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-618871 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
E0307 22:19:28.003736    7764 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/addons-963512/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-618871 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (24.804440015s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-618871 image list
helpers_test.go:175: Cleaning up "test-preload-618871" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-618871
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-618871: (2.476399462s)
--- PASS: TestPreload (125.09s)

                                                
                                    
TestScheduledStopUnix (105.43s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-417469 --memory=2048 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-417469 --memory=2048 --driver=docker  --container-runtime=containerd: (29.332872883s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-417469 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-417469 -n scheduled-stop-417469
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-417469 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-417469 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-417469 -n scheduled-stop-417469
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-417469
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-417469 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0307 22:20:45.512646    7764 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/functional-894723/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-417469
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-417469: exit status 7 (71.761323ms)

                                                
                                                
-- stdout --
	scheduled-stop-417469
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-417469 -n scheduled-stop-417469
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-417469 -n scheduled-stop-417469: exit status 7 (69.323829ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-417469" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-417469
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-417469: (4.51272609s)
--- PASS: TestScheduledStopUnix (105.43s)
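
The sequence above arms a stop, cancels it, re-arms it with a short delay, and finally confirms the host went down. As a pure-Go stand-in for the semantics (minikube actually daemonizes a stop process, hence the "os: process already finished" signal errors in the log), the timer race looks like this:

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		// "minikube stop --schedule 15s": arm a delayed stop.
		stop := time.AfterFunc(15*time.Second, func() {
			fmt.Println("stopping cluster now")
		})
		// "minikube stop --cancel-scheduled": Stop reports false if the
		// timer already fired - the race the test's status checks probe.
		if stop.Stop() {
			fmt.Println("scheduled stop cancelled")
		}
	}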

                                                
                                    
TestInsufficientStorage (10.62s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-151957 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-151957 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (8.198642575s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"5c140512-c8a3-4b3e-821d-81541424a129","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-151957] minikube v1.32.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"8fddcc47-e336-4f0f-96e6-cef97f69c963","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18320"}}
	{"specversion":"1.0","id":"01abd22a-b983-4fde-b918-8b84b65fe03e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"a79b6dd2-ae3a-4e27-bc66-45c26f21f9f6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18320-2408/kubeconfig"}}
	{"specversion":"1.0","id":"d05b8592-bb7c-4b29-83c2-3746cd519c20","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18320-2408/.minikube"}}
	{"specversion":"1.0","id":"c144ee48-e1da-48f7-839c-663ac3bf675e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"978be4a6-e70f-41a6-8cc4-5d981b39b1e6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"7f019750-da9e-4913-8fa1-ef573bd38038","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"1f8cad40-4b9e-4cef-9648-5dc2e23e25a0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"2b51c5d9-ceee-458b-b146-1d944c8c23ba","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"039ad12e-4182-420c-a782-40fa3b658cd8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"c123aba4-e0b7-498e-b8bb-2a32d2ef1095","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-151957\" primary control-plane node in \"insufficient-storage-151957\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"f5de5015-2f40-426d-9185-5c6d4f5691fd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.42-1708944392-18244 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"99dd48eb-85c3-4a1d-bf6d-e6ed372e9e70","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"cca82375-bb46-4483-849a-12ef5c3663d2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-151957 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-151957 --output=json --layout=cluster: exit status 7 (278.043509ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-151957","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-151957","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0307 22:21:26.826737  146732 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-151957" does not appear in /home/jenkins/minikube-integration/18320-2408/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-151957 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-151957 --output=json --layout=cluster: exit status 7 (280.978616ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-151957","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-151957","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0307 22:21:27.109198  146783 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-151957" does not appear in /home/jenkins/minikube-integration/18320-2408/kubeconfig
	E0307 22:21:27.119352  146783 status.go:560] unable to read event log: stat: stat /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/insufficient-storage-151957/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-151957" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-151957
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-151957: (1.863688404s)
--- PASS: TestInsufficientStorage (10.62s)
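
With `--output=json`, minikube writes one CloudEvents-style JSON object per line, which is what the stdout blocks above show. A minimal decoder for the error event (struct fields limited to what this report actually contains; the sample line is trimmed from the RSRC_DOCKER_STORAGE event above):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	type event struct {
		Type string `json:"type"`
		Data struct {
			Name     string `json:"name"`
			Message  string `json:"message"`
			ExitCode string `json:"exitcode"`
		} `json:"data"`
	}

	func main() {
		line := `{"specversion":"1.0","type":"io.k8s.sigs.minikube.error",` +
			`"data":{"exitcode":"26","name":"RSRC_DOCKER_STORAGE",` +
			`"message":"Docker is out of disk space!"}}`
		var e event
		if err := json.Unmarshal([]byte(line), &e); err != nil {
			panic(err)
		}
		// Prints: io.k8s.sigs.minikube.error RSRC_DOCKER_STORAGE 26
		fmt.Println(e.Type, e.Data.Name, e.Data.ExitCode)
	}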

                                                
                                    
TestRunningBinaryUpgrade (85.76s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1953453931 start -p running-upgrade-890001 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1953453931 start -p running-upgrade-890001 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (44.220478785s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-890001 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-890001 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (37.652141406s)
helpers_test.go:175: Cleaning up "running-upgrade-890001" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-890001
E0307 22:27:31.049916    7764 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/addons-963512/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-890001: (2.802279211s)
--- PASS: TestRunningBinaryUpgrade (85.76s)

                                                
                                    
TestKubernetesUpgrade (385.42s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-325852 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-325852 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (57.911660355s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-325852
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-325852: (1.417280935s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-325852 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-325852 status --format={{.Host}}: exit status 7 (169.014733ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-325852 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-325852 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (5m3.169581775s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-325852 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-325852 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-325852 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd: exit status 106 (123.89208ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-325852] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18320
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18320-2408/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18320-2408/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-325852
	    minikube start -p kubernetes-upgrade-325852 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-3258522 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-325852 --kubernetes-version=v1.29.0-rc.2
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-325852 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-325852 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (20.069321829s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-325852" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-325852
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-325852: (2.458178938s)
--- PASS: TestKubernetesUpgrade (385.42s)
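
The exit status 106 block above is minikube's downgrade guard: an existing cluster may be upgraded in place, but a request for an older `--kubernetes-version` is refused with suggestions rather than attempted. A toy sketch of the comparison (naive minor-version parse, not minikube's real semver handling):

	package main

	import (
		"fmt"
		"strconv"
		"strings"
	)

	// minor pulls the minor version out of strings like "v1.29.0-rc.2".
	func minor(v string) int {
		parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
		n, _ := strconv.Atoi(parts[1])
		return n
	}

	func main() {
		existing, requested := "v1.29.0-rc.2", "v1.20.0"
		if minor(requested) < minor(existing) {
			// Mirrors K8S_DOWNGRADE_UNSUPPORTED (exit 106) above.
			fmt.Printf("refusing to downgrade %s -> %s\n", existing, requested)
		}
	}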

                                                
                                    
TestMissingContainerUpgrade (169.81s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.3495619840 start -p missing-upgrade-010087 --memory=2200 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.3495619840 start -p missing-upgrade-010087 --memory=2200 --driver=docker  --container-runtime=containerd: (1m33.48924675s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-010087
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-010087: (10.277126182s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-010087
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-010087 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-010087 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m2.266030673s)
helpers_test.go:175: Cleaning up "missing-upgrade-010087" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-010087
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-010087: (2.201764258s)
--- PASS: TestMissingContainerUpgrade (169.81s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-216264 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-216264 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd: exit status 14 (86.194168ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-216264] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18320
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18320-2408/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18320-2408/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
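
The MK_USAGE exit above is a plain flag-compatibility check: `--no-kubernetes` and an explicit `--kubernetes-version` contradict each other. A hypothetical sketch of the same validation with the standard flag package (not minikube's actual cobra-based flag handling):

	package main

	import (
		"flag"
		"fmt"
		"os"
	)

	func main() {
		noK8s := flag.Bool("no-kubernetes", false, "start without Kubernetes")
		version := flag.String("kubernetes-version", "", "Kubernetes version to use")
		flag.Parse()
		if *noK8s && *version != "" {
			// minikube reports this as MK_USAGE and exits with status 14.
			fmt.Fprintln(os.Stderr, "cannot specify --kubernetes-version with --no-kubernetes")
			os.Exit(14)
		}
		fmt.Println("flags OK")
	}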

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (39.69s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-216264 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-216264 --driver=docker  --container-runtime=containerd: (39.211214217s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-216264 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (39.69s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (16.56s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-216264 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-216264 --no-kubernetes --driver=docker  --container-runtime=containerd: (14.424399008s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-216264 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-216264 status -o json: exit status 2 (286.586462ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-216264","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-216264
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-216264: (1.851009769s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (16.56s)

                                                
                                    
TestNoKubernetes/serial/Start (5.14s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-216264 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-216264 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.138994499s)
--- PASS: TestNoKubernetes/serial/Start (5.14s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-216264 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-216264 "sudo systemctl is-active --quiet service kubelet": exit status 1 (266.600004ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)
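
This check passes precisely because the command fails: `systemctl is-active --quiet` exits 0 only when the unit is active, and exit status 3 (inactive) propagates back through ssh. A local sketch of asserting "not running" the same way (run on the node itself rather than over SSH):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		err := exec.Command("systemctl", "is-active", "--quiet", "service", "kubelet").Run()
		var exitErr *exec.ExitError
		switch {
		case errors.As(err, &exitErr):
			// Exit 3 = inactive: the passing case for --no-kubernetes.
			fmt.Printf("kubelet not active (exit %d)\n", exitErr.ExitCode())
		case err != nil:
			fmt.Println("could not run systemctl:", err)
		default:
			fmt.Println("kubelet is active - this test would fail")
		}
	}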

                                                
                                    
TestNoKubernetes/serial/ProfileList (0.88s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.88s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-216264
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-216264: (1.19901846s)
--- PASS: TestNoKubernetes/serial/Stop (1.20s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (6.74s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-216264 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-216264 --driver=docker  --container-runtime=containerd: (6.737119621s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.74s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.34s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-216264 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-216264 "sudo systemctl is-active --quiet service kubelet": exit status 1 (335.167861ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.34s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (1.24s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.24s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (102.53s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2117554214 start -p stopped-upgrade-565285 --memory=2200 --vm-driver=docker  --container-runtime=containerd
E0307 22:24:28.008499    7764 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/addons-963512/client.crt: no such file or directory
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2117554214 start -p stopped-upgrade-565285 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (44.786539762s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2117554214 -p stopped-upgrade-565285 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2117554214 -p stopped-upgrade-565285 stop: (19.918872685s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-565285 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0307 22:25:45.512422    7764 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/functional-894723/client.crt: no such file or directory
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-565285 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (37.826943103s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (102.53s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.14s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-565285
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-565285: (1.142503355s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.14s)

                                                
                                    
TestPause/serial/Start (58.38s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-797830 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-797830 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (58.379890245s)
--- PASS: TestPause/serial/Start (58.38s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (7.22s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-797830 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-797830 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (7.182501208s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (7.22s)

                                                
                                    
TestPause/serial/Pause (1.16s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-797830 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-797830 --alsologtostderr -v=5: (1.158401798s)
--- PASS: TestPause/serial/Pause (1.16s)

                                                
                                    
TestPause/serial/VerifyStatus (0.32s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-797830 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-797830 --output=json --layout=cluster: exit status 2 (323.43059ms)

                                                
                                                
-- stdout --
	{"Name":"pause-797830","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-797830","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.32s)
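
The `--layout=cluster` JSON above reuses HTTP-like status codes (a paused cluster really does report 418). The mapping below is assembled only from codes that appear somewhere in this report, as a reading aid rather than an exhaustive table:

	package main

	import "fmt"

	func main() {
		// Codes observed in this report's status JSON, in numeric order.
		statusName := map[int]string{
			200: "OK",
			405: "Stopped",
			418: "Paused",
			500: "Error",
			507: "InsufficientStorage",
		}
		for _, code := range []int{200, 405, 418, 500, 507} {
			fmt.Printf("%d => %s\n", code, statusName[code])
		}
	}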

                                                
                                    
TestPause/serial/Unpause (0.8s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-797830 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.80s)

                                                
                                    
TestPause/serial/PauseAgain (1.03s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-797830 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-797830 --alsologtostderr -v=5: (1.028590008s)
--- PASS: TestPause/serial/PauseAgain (1.03s)

                                                
                                    
TestPause/serial/DeletePaused (3.19s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-797830 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-797830 --alsologtostderr -v=5: (3.193897243s)
--- PASS: TestPause/serial/DeletePaused (3.19s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.51s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-797830
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-797830: exit status 1 (21.678526ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-797830: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.51s)

                                                
                                    
TestNetworkPlugins/group/false (5.58s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-608644 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-608644 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (255.561913ms)

                                                
                                                
-- stdout --
	* [false-608644] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18320
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18320-2408/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18320-2408/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0307 22:29:12.128042  186678 out.go:291] Setting OutFile to fd 1 ...
	I0307 22:29:12.128338  186678 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 22:29:12.128368  186678 out.go:304] Setting ErrFile to fd 2...
	I0307 22:29:12.128386  186678 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0307 22:29:12.128660  186678 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18320-2408/.minikube/bin
	I0307 22:29:12.129110  186678 out.go:298] Setting JSON to false
	I0307 22:29:12.130072  186678 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":4296,"bootTime":1709846257,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0307 22:29:12.130171  186678 start.go:139] virtualization:  
	I0307 22:29:12.133534  186678 out.go:177] * [false-608644] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0307 22:29:12.136109  186678 out.go:177]   - MINIKUBE_LOCATION=18320
	I0307 22:29:12.138076  186678 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0307 22:29:12.136168  186678 notify.go:220] Checking for updates...
	I0307 22:29:12.140101  186678 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18320-2408/kubeconfig
	I0307 22:29:12.142357  186678 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18320-2408/.minikube
	I0307 22:29:12.144742  186678 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0307 22:29:12.146793  186678 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0307 22:29:12.149080  186678 config.go:182] Loaded profile config "force-systemd-flag-026071": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0307 22:29:12.149191  186678 driver.go:392] Setting default libvirt URI to qemu:///system
	I0307 22:29:12.196453  186678 docker.go:122] docker version: linux-25.0.4:Docker Engine - Community
	I0307 22:29:12.196571  186678 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0307 22:29:12.301526  186678 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:56 SystemTime:2024-03-07 22:29:12.289561393 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0307 22:29:12.301629  186678 docker.go:295] overlay module found
	I0307 22:29:12.304138  186678 out.go:177] * Using the docker driver based on user configuration
	I0307 22:29:12.305724  186678 start.go:297] selected driver: docker
	I0307 22:29:12.305746  186678 start.go:901] validating driver "docker" against <nil>
	I0307 22:29:12.305760  186678 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0307 22:29:12.308367  186678 out.go:177] 
	W0307 22:29:12.309987  186678 out.go:239] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0307 22:29:12.311946  186678 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-608644 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-608644

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-608644

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-608644

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-608644

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-608644

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-608644

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-608644

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-608644

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-608644

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-608644

>>> host: /etc/nsswitch.conf:
* Profile "false-608644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-608644"

>>> host: /etc/hosts:
* Profile "false-608644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-608644"

>>> host: /etc/resolv.conf:
* Profile "false-608644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-608644"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-608644

>>> host: crictl pods:
* Profile "false-608644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-608644"

>>> host: crictl containers:
* Profile "false-608644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-608644"

>>> k8s: describe netcat deployment:
error: context "false-608644" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-608644" does not exist

>>> k8s: netcat logs:
error: context "false-608644" does not exist

>>> k8s: describe coredns deployment:
error: context "false-608644" does not exist

>>> k8s: describe coredns pods:
error: context "false-608644" does not exist

>>> k8s: coredns logs:
error: context "false-608644" does not exist

>>> k8s: describe api server pod(s):
error: context "false-608644" does not exist

>>> k8s: api server logs:
error: context "false-608644" does not exist

>>> host: /etc/cni:
* Profile "false-608644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-608644"

>>> host: ip a s:
* Profile "false-608644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-608644"

>>> host: ip r s:
* Profile "false-608644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-608644"

>>> host: iptables-save:
* Profile "false-608644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-608644"

>>> host: iptables table nat:
* Profile "false-608644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-608644"

>>> k8s: describe kube-proxy daemon set:
error: context "false-608644" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-608644" does not exist

>>> k8s: kube-proxy logs:
error: context "false-608644" does not exist

>>> host: kubelet daemon status:
* Profile "false-608644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-608644"

>>> host: kubelet daemon config:
* Profile "false-608644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-608644"

>>> k8s: kubelet logs:
* Profile "false-608644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-608644"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-608644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-608644"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-608644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-608644"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-608644

>>> host: docker daemon status:
* Profile "false-608644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-608644"

>>> host: docker daemon config:
* Profile "false-608644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-608644"

>>> host: /etc/docker/daemon.json:
* Profile "false-608644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-608644"

>>> host: docker system info:
* Profile "false-608644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-608644"

>>> host: cri-docker daemon status:
* Profile "false-608644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-608644"

>>> host: cri-docker daemon config:
* Profile "false-608644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-608644"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-608644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-608644"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-608644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-608644"

>>> host: cri-dockerd version:
* Profile "false-608644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-608644"

>>> host: containerd daemon status:
* Profile "false-608644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-608644"

>>> host: containerd daemon config:
* Profile "false-608644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-608644"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-608644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-608644"

>>> host: /etc/containerd/config.toml:
* Profile "false-608644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-608644"

>>> host: containerd config dump:
* Profile "false-608644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-608644"

>>> host: crio daemon status:
* Profile "false-608644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-608644"

>>> host: crio daemon config:
* Profile "false-608644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-608644"

>>> host: /etc/crio:
* Profile "false-608644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-608644"

>>> host: crio config:
* Profile "false-608644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-608644"

----------------------- debugLogs end: false-608644 [took: 5.046006403s] --------------------------------
helpers_test.go:175: Cleaning up "false-608644" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-608644
--- PASS: TestNetworkPlugins/group/false (5.58s)
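Note: every probe in the debugLogs dump above fails with one of two messages, and both trace back to the same cause: the rejected start never created the profile. kubectl-based probes report a missing kubeconfig context, while host-based probes report a missing minikube profile. A quick way to reproduce the same split by hand, using only standard commands (a sketch, not part of the test):

	kubectl config get-contexts      # no "false-608644" context entry
	minikube profile list            # no "false-608644" profile either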

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (145.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-497253 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-497253 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: (2m25.120464834s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (145.12s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (9.61s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-497253 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [96bdd65d-e948-4a87-bae3-2471e85fa3ac] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [96bdd65d-e948-4a87-bae3-2471e85fa3ac] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.004293808s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-497253 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.61s)
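Note: the final `ulimit -n` exec is a smoke test that `kubectl exec` works against the old v1.20.0 control plane and that the container sees a sane open-file limit. The 8m0s pod wait performed by the helper corresponds roughly to this standard kubectl invocation (a sketch; the helper actually polls by label rather than by pod name):

	kubectl --context old-k8s-version-497253 wait --for=condition=Ready pod/busybox --timeout=480s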

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.55s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-497253 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-497253 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.218635728s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-497253 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.55s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (81.01s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-767597 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-767597 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2: (1m21.014633317s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (81.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (14.24s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-497253 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-497253 --alsologtostderr -v=3: (14.241452867s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (14.24s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.29s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-497253 -n old-k8s-version-497253
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-497253 -n old-k8s-version-497253: exit status 7 (97.742331ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-497253 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.29s)
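Note: the "(may be ok)" annotation reflects how minikube status reports state: per its help text, the exit code encodes per-component state in its low bits, so a deliberately stopped cluster exits non-zero even though nothing is wrong, and exit status 7 right after a stop is expected. A spot-check mirroring the test's own probe (a sketch; the bit-encoding detail is an assumption from minikube's documentation, not from this log):

	out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-497253 || echo "status exited $?"   # prints Stopped, then 7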

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (8.36s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-767597 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [e2d97250-78ea-4430-812e-481b527504a1] Pending
helpers_test.go:344: "busybox" [e2d97250-78ea-4430-812e-481b527504a1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [e2d97250-78ea-4430-812e-481b527504a1] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.004079342s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-767597 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.36s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.3s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-767597 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-767597 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.159180604s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-767597 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.30s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (12.12s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-767597 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-767597 --alsologtostderr -v=3: (12.115371077s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.12s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-767597 -n no-preload-767597
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-767597 -n no-preload-767597: exit status 7 (85.50812ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-767597 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (266.28s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-767597 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2
E0307 22:35:45.512354    7764 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/functional-894723/client.crt: no such file or directory
E0307 22:39:28.003911    7764 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/addons-963512/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-767597 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2: (4m25.924183509s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-767597 -n no-preload-767597
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (266.28s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-lg2ln" [e98fe3b5-b1ae-4c8e-b287-23eb1220943b] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004434639s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-lg2ln" [e98fe3b5-b1ae-4c8e-b287-23eb1220943b] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004522887s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-767597 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-767597 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.27s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (3.2s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-767597 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-767597 -n no-preload-767597
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-767597 -n no-preload-767597: exit status 2 (339.151128ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-767597 -n no-preload-767597
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-767597 -n no-preload-767597: exit status 2 (334.022465ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-767597 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-767597 -n no-preload-767597
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-767597 -n no-preload-767597
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.20s)
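Note: pause freezes the control-plane containers rather than deleting anything, which is why the probes above exit with status 2 while {{.APIServer}} reports Paused and {{.Kubelet}} reports Stopped, and why unpause brings both back without a full restart. The sequence the test performs, replayed by hand (a sketch; the post-unpause output is the expected result, not copied from this log):

	out/minikube-linux-arm64 pause -p no-preload-767597
	out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-767597    # Paused, exit 2
	out/minikube-linux-arm64 unpause -p no-preload-767597
	out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-767597    # Running, exit 0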

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (67.65s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-634269 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-634269 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4: (1m7.646186853s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (67.65s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-dsjvb" [1cc6a43e-1084-4cc5-83ff-0952c6a034fe] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.006518612s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.14s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-dsjvb" [1cc6a43e-1084-4cc5-83ff-0952c6a034fe] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004903239s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-497253 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.14s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.3s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-497253 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.30s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (3.85s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-497253 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p old-k8s-version-497253 --alsologtostderr -v=1: (1.072956923s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-497253 -n old-k8s-version-497253
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-497253 -n old-k8s-version-497253: exit status 2 (427.942057ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-497253 -n old-k8s-version-497253
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-497253 -n old-k8s-version-497253: exit status 2 (396.83793ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-497253 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-497253 -n old-k8s-version-497253
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-497253 -n old-k8s-version-497253
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.85s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (63.56s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-418544 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4
E0307 22:40:45.512978    7764 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/functional-894723/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-418544 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4: (1m3.560496571s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (63.56s)
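Note: the default-k8s-diff-port variant differs from a stock start only in --apiserver-port=8444 (instead of 8443). With the docker driver the kubeconfig still points at a host-mapped localhost port, which forwards into the node's 8444. One way to inspect the resulting server URL (a sketch; the jsonpath filter assumes the cluster entry is named after the profile, as minikube does by default):

	kubectl config view -o jsonpath='{.clusters[?(@.name=="default-k8s-diff-port-418544")].cluster.server}'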

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (8.45s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-634269 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [77dc25fc-712c-4baa-b227-a8bd3f86c327] Pending
helpers_test.go:344: "busybox" [77dc25fc-712c-4baa-b227-a8bd3f86c327] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [77dc25fc-712c-4baa-b227-a8bd3f86c327] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.004653028s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-634269 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.45s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.24s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-634269 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-634269 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.120912517s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-634269 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.24s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (12.08s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-634269 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-634269 --alsologtostderr -v=3: (12.079421688s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.08s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.41s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-418544 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [1aa64da1-feef-4754-a3ad-64a31fe3744a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [1aa64da1-feef-4754-a3ad-64a31fe3744a] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.004208092s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-418544 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.41s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-634269 -n embed-certs-634269
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-634269 -n embed-certs-634269: exit status 7 (76.225365ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-634269 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (268.38s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-634269 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-634269 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4: (4m27.992741244s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-634269 -n embed-certs-634269
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (268.38s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.56s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-418544 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-418544 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.395745056s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-418544 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.56s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.43s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-418544 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-418544 --alsologtostderr -v=3: (12.430078873s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.43s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-418544 -n default-k8s-diff-port-418544
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-418544 -n default-k8s-diff-port-418544: exit status 7 (81.740052ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-418544 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (269.98s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-418544 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4
E0307 22:43:12.150389    7764 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/old-k8s-version-497253/client.crt: no such file or directory
E0307 22:43:12.156006    7764 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/old-k8s-version-497253/client.crt: no such file or directory
E0307 22:43:12.166329    7764 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/old-k8s-version-497253/client.crt: no such file or directory
E0307 22:43:12.186633    7764 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/old-k8s-version-497253/client.crt: no such file or directory
E0307 22:43:12.226879    7764 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/old-k8s-version-497253/client.crt: no such file or directory
E0307 22:43:12.307148    7764 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/old-k8s-version-497253/client.crt: no such file or directory
E0307 22:43:12.467555    7764 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/old-k8s-version-497253/client.crt: no such file or directory
E0307 22:43:12.787884    7764 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/old-k8s-version-497253/client.crt: no such file or directory
E0307 22:43:13.428743    7764 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/old-k8s-version-497253/client.crt: no such file or directory
E0307 22:43:14.709208    7764 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/old-k8s-version-497253/client.crt: no such file or directory
E0307 22:43:17.269521    7764 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/old-k8s-version-497253/client.crt: no such file or directory
E0307 22:43:22.390486    7764 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/old-k8s-version-497253/client.crt: no such file or directory
E0307 22:43:32.631219    7764 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/old-k8s-version-497253/client.crt: no such file or directory
E0307 22:43:53.111641    7764 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/old-k8s-version-497253/client.crt: no such file or directory
E0307 22:44:11.050518    7764 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/addons-963512/client.crt: no such file or directory
E0307 22:44:28.005330    7764 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/addons-963512/client.crt: no such file or directory
E0307 22:44:34.072085    7764 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/old-k8s-version-497253/client.crt: no such file or directory
E0307 22:44:43.695814    7764 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/no-preload-767597/client.crt: no such file or directory
E0307 22:44:43.701163    7764 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/no-preload-767597/client.crt: no such file or directory
E0307 22:44:43.711451    7764 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/no-preload-767597/client.crt: no such file or directory
E0307 22:44:43.731738    7764 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/no-preload-767597/client.crt: no such file or directory
E0307 22:44:43.772052    7764 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/no-preload-767597/client.crt: no such file or directory
E0307 22:44:43.852341    7764 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/no-preload-767597/client.crt: no such file or directory
E0307 22:44:44.012773    7764 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/no-preload-767597/client.crt: no such file or directory
E0307 22:44:44.333453    7764 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/no-preload-767597/client.crt: no such file or directory
E0307 22:44:44.974291    7764 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/no-preload-767597/client.crt: no such file or directory
E0307 22:44:46.254744    7764 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/no-preload-767597/client.crt: no such file or directory
E0307 22:44:48.815645    7764 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/no-preload-767597/client.crt: no such file or directory
E0307 22:44:53.935875    7764 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/no-preload-767597/client.crt: no such file or directory
E0307 22:45:04.176380    7764 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/no-preload-767597/client.crt: no such file or directory
E0307 22:45:24.657314    7764 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/no-preload-767597/client.crt: no such file or directory
E0307 22:45:45.512335    7764 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/functional-894723/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-418544 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4: (4m29.435743236s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-418544 -n default-k8s-diff-port-418544
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (269.98s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-4zhpr" [e08c0fef-40ec-43e3-a180-b4742a39d344] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00466758s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.13s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-4zhpr" [e08c0fef-40ec-43e3-a180-b4742a39d344] Running
E0307 22:45:55.992565    7764 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/old-k8s-version-497253/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004222057s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-634269 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.13s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.41s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-634269 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.41s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.47s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-634269 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-634269 -n embed-certs-634269
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-634269 -n embed-certs-634269: exit status 2 (328.942065ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-634269 -n embed-certs-634269
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-634269 -n embed-certs-634269: exit status 2 (346.231865ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-634269 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-634269 -n embed-certs-634269
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-634269 -n embed-certs-634269
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.47s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (48.26s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-499298 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-499298 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2: (48.264098582s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (48.26s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-dqfhj" [d68cd6b3-3509-4cae-8137-ecde50bb7025] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.012825118s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.13s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-dqfhj" [d68cd6b3-3509-4cae-8137-ecde50bb7025] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00475567s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-418544 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.13s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.30s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-418544 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.30s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.98s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-418544 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-418544 -n default-k8s-diff-port-418544
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-418544 -n default-k8s-diff-port-418544: exit status 2 (414.770602ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-418544 -n default-k8s-diff-port-418544
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-418544 -n default-k8s-diff-port-418544: exit status 2 (443.606993ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-418544 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-418544 -n default-k8s-diff-port-418544
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-418544 -n default-k8s-diff-port-418544
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.98s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (66.60s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-608644 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-608644 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (1m6.598896107s)
--- PASS: TestNetworkPlugins/group/auto/Start (66.60s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.25s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-499298 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-499298 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (2.253902781s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.25s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.41s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-499298 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-499298 --alsologtostderr -v=3: (1.407458249s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.41s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.39s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-499298 -n newest-cni-499298
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-499298 -n newest-cni-499298: exit status 7 (162.649767ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-499298 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.39s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (16.99s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-499298 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-499298 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2: (16.447536966s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-499298 -n newest-cni-499298
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (16.99s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.50s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-499298 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.50s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (4.28s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-499298 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-499298 -n newest-cni-499298
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-499298 -n newest-cni-499298: exit status 2 (494.24038ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-499298 -n newest-cni-499298
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-499298 -n newest-cni-499298: exit status 2 (590.754499ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-499298 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p newest-cni-499298 --alsologtostderr -v=1: (1.081757646s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-499298 -n newest-cni-499298
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-499298 -n newest-cni-499298
--- PASS: TestStartStop/group/newest-cni/serial/Pause (4.28s)
E0307 22:52:35.159111    7764 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/auto-608644/client.crt: no such file or directory
E0307 22:52:35.164445    7764 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/auto-608644/client.crt: no such file or directory
E0307 22:52:35.174751    7764 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/auto-608644/client.crt: no such file or directory
E0307 22:52:35.195064    7764 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/auto-608644/client.crt: no such file or directory
E0307 22:52:35.235349    7764 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/auto-608644/client.crt: no such file or directory
E0307 22:52:35.315701    7764 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/auto-608644/client.crt: no such file or directory
E0307 22:52:35.476069    7764 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/auto-608644/client.crt: no such file or directory
E0307 22:52:35.797041    7764 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/auto-608644/client.crt: no such file or directory
E0307 22:52:36.438011    7764 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/auto-608644/client.crt: no such file or directory
E0307 22:52:37.438129    7764 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/default-k8s-diff-port-418544/client.crt: no such file or directory
E0307 22:52:37.718437    7764 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/auto-608644/client.crt: no such file or directory
E0307 22:52:40.279552    7764 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/auto-608644/client.crt: no such file or directory
E0307 22:52:45.400406    7764 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/auto-608644/client.crt: no such file or directory
E0307 22:52:55.641448    7764 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/auto-608644/client.crt: no such file or directory

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (59.10s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-608644 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
E0307 22:47:27.539647    7764 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/no-preload-767597/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-608644 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (59.104864739s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (59.10s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-608644 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.39s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (10.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-608644 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-sj6qc" [dd6060f6-8a0e-4404-b9d3-cc3bc2a089dd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-sj6qc" [dd6060f6-8a0e-4404-b9d3-cc3bc2a089dd] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.004971558s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.41s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-608644 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.31s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-608644 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.29s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-608644 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.28s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (76.71s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-608644 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
E0307 22:48:12.150294    7764 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/old-k8s-version-497253/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-608644 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (1m16.714614755s)
--- PASS: TestNetworkPlugins/group/calico/Start (76.71s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-tj722" [9148edca-9359-496a-9121-0ef145e7426f] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.0051054s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-608644 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.42s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (9.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-608644 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-pl8lv" [d6b530ae-0a88-499a-bf6d-335ea9e7235b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-pl8lv" [d6b530ae-0a88-499a-bf6d-335ea9e7235b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.004189191s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.33s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.20s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-608644 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-608644 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-608644 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.18s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (68.91s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-608644 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-608644 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (1m8.912106129s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (68.91s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-4pq6t" [a702331b-bfd5-4fb1-932c-f94bd31d3645] Running
E0307 22:49:28.012206    7764 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/addons-963512/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005805332s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-608644 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.41s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (11.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-608644 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-m957q" [e644338b-d040-4b88-9e3f-c67f0ee1c35e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-m957q" [e644338b-d040-4b88-9e3f-c67f0ee1c35e] Running
E0307 22:49:43.696379    7764 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/no-preload-767597/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.004344952s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.44s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-608644 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.21s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-608644 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-608644 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (85.52s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-608644 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
E0307 22:50:11.380389    7764 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/no-preload-767597/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-608644 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (1m25.524252203s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (85.52s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.46s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-608644 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.46s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (12.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-608644 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-cglqq" [c2828f71-891e-439c-83e2-b262c14f8ae0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-cglqq" [c2828f71-891e-439c-83e2-b262c14f8ae0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.004177801s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.34s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-608644 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.32s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-608644 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.21s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-608644 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.21s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (53.98s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-608644 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
E0307 22:51:15.517997    7764 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/default-k8s-diff-port-418544/client.crt: no such file or directory
E0307 22:51:15.523308    7764 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/default-k8s-diff-port-418544/client.crt: no such file or directory
E0307 22:51:15.533585    7764 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/default-k8s-diff-port-418544/client.crt: no such file or directory
E0307 22:51:15.553813    7764 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/default-k8s-diff-port-418544/client.crt: no such file or directory
E0307 22:51:15.594043    7764 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/default-k8s-diff-port-418544/client.crt: no such file or directory
E0307 22:51:15.674357    7764 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/default-k8s-diff-port-418544/client.crt: no such file or directory
E0307 22:51:15.834550    7764 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/default-k8s-diff-port-418544/client.crt: no such file or directory
E0307 22:51:16.154793    7764 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/default-k8s-diff-port-418544/client.crt: no such file or directory
E0307 22:51:16.795481    7764 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/default-k8s-diff-port-418544/client.crt: no such file or directory
E0307 22:51:18.075963    7764 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/default-k8s-diff-port-418544/client.crt: no such file or directory
E0307 22:51:20.636493    7764 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/default-k8s-diff-port-418544/client.crt: no such file or directory
E0307 22:51:25.756952    7764 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/default-k8s-diff-port-418544/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-608644 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (53.979950337s)
--- PASS: TestNetworkPlugins/group/flannel/Start (53.98s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-608644 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.39s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-608644 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-tc42l" [e0a04b55-031e-42bd-962f-ea8ad83da4b4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0307 22:51:35.997160    7764 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/default-k8s-diff-port-418544/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-tc42l" [e0a04b55-031e-42bd-962f-ea8ad83da4b4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.003595199s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.26s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-296gn" [3ab2c84b-f5ac-46c1-8c6a-fc97329a0df7] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.00489049s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.02s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-608644 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-608644 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-608644 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-608644 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.31s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (10.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-608644 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-sph7x" [289f6864-6b56-4e00-a79c-bb07c6b90c1e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-sph7x" [289f6864-6b56-4e00-a79c-bb07c6b90c1e] Running
E0307 22:51:56.477357    7764 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/default-k8s-diff-port-418544/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.004014762s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.37s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-608644 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.26s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-608644 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-608644 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.26s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (54.48s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-608644 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-608644 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (54.482821084s)
--- PASS: TestNetworkPlugins/group/bridge/Start (54.48s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-608644 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.27s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (10.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-608644 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-szjlz" [d929a2f7-2695-4e35-a077-392ec2163d6e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-szjlz" [d929a2f7-2695-4e35-a077-392ec2163d6e] Running
E0307 22:53:12.150521    7764 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18320-2408/.minikube/profiles/old-k8s-version-497253/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.003791743s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.27s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-608644 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-608644 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-608644 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.17s)
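For context on the last two probes: Localhost checks that the pod can reach a listener on its own loopback, while HairPin checks that traffic sent to the pod's own Service name ("netcat") can loop back to the same pod, i.e. hairpin NAT works under the bridge CNI. A hedged sketch of both checks, assuming kubectl on PATH:

package main

import (
	"fmt"
	"os/exec"
)

// probe mirrors the logged command: nc -w 5 -i 5 -z <target> 8080
func probe(target string) {
	cmd := exec.Command("kubectl", "--context", "bridge-608644",
		"exec", "deployment/netcat", "--", "/bin/sh", "-c",
		fmt.Sprintf("nc -w 5 -i 5 -z %s 8080", target))
	if err := cmd.Run(); err != nil {
		fmt.Println(target, "probe failed:", err)
		return
	}
	fmt.Println(target, "probe ok")
}

func main() {
	probe("localhost") // Localhost subtest
	probe("netcat")    // HairPin subtest: pod -> Service VIP -> same pod
}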

                                                
                                    

Test skip (31/335)

TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.4/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0.59s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-983577 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-983577" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-983577
--- SKIP: TestDownloadOnlyKic (0.59s)

                                                
                                    
TestOffline (0s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
TestAddons/parallel/HelmTiller (0s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:444: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-385246" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-385246
--- SKIP: TestStartStop/group/disable-driver-mounts (0.19s)

                                                
                                    
TestNetworkPlugins/group/kubenet (4.93s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-608644 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-608644

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-608644

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-608644

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-608644

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-608644

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-608644

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-608644

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-608644

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-608644

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-608644

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-608644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-608644"

>>> host: /etc/hosts:
* Profile "kubenet-608644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-608644"

>>> host: /etc/resolv.conf:
* Profile "kubenet-608644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-608644"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-608644

>>> host: crictl pods:
* Profile "kubenet-608644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-608644"

>>> host: crictl containers:
* Profile "kubenet-608644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-608644"

>>> k8s: describe netcat deployment:
error: context "kubenet-608644" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-608644" does not exist

>>> k8s: netcat logs:
error: context "kubenet-608644" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-608644" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-608644" does not exist

>>> k8s: coredns logs:
error: context "kubenet-608644" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-608644" does not exist

>>> k8s: api server logs:
error: context "kubenet-608644" does not exist

>>> host: /etc/cni:
* Profile "kubenet-608644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-608644"

>>> host: ip a s:
* Profile "kubenet-608644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-608644"

>>> host: ip r s:
* Profile "kubenet-608644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-608644"

>>> host: iptables-save:
* Profile "kubenet-608644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-608644"

>>> host: iptables table nat:
* Profile "kubenet-608644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-608644"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-608644" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-608644" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-608644" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-608644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-608644"

>>> host: kubelet daemon config:
* Profile "kubenet-608644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-608644"

>>> k8s: kubelet logs:
* Profile "kubenet-608644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-608644"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-608644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-608644"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-608644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-608644"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-608644

>>> host: docker daemon status:
* Profile "kubenet-608644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-608644"

>>> host: docker daemon config:
* Profile "kubenet-608644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-608644"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-608644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-608644"

>>> host: docker system info:
* Profile "kubenet-608644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-608644"

>>> host: cri-docker daemon status:
* Profile "kubenet-608644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-608644"

>>> host: cri-docker daemon config:
* Profile "kubenet-608644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-608644"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-608644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-608644"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-608644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-608644"

>>> host: cri-dockerd version:
* Profile "kubenet-608644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-608644"

>>> host: containerd daemon status:
* Profile "kubenet-608644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-608644"

>>> host: containerd daemon config:
* Profile "kubenet-608644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-608644"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-608644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-608644"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-608644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-608644"

>>> host: containerd config dump:
* Profile "kubenet-608644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-608644"

>>> host: crio daemon status:
* Profile "kubenet-608644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-608644"

>>> host: crio daemon config:
* Profile "kubenet-608644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-608644"

>>> host: /etc/crio:
* Profile "kubenet-608644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-608644"

>>> host: crio config:
* Profile "kubenet-608644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-608644"

----------------------- debugLogs end: kubenet-608644 [took: 4.730857412s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-608644" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-608644
--- SKIP: TestNetworkPlugins/group/kubenet (4.93s)

                                                
                                    
TestNetworkPlugins/group/cilium (6.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-608644 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-608644

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-608644

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-608644

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-608644

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-608644

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-608644

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-608644

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-608644

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-608644

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-608644

>>> host: /etc/nsswitch.conf:
* Profile "cilium-608644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-608644"

>>> host: /etc/hosts:
* Profile "cilium-608644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-608644"

>>> host: /etc/resolv.conf:
* Profile "cilium-608644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-608644"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-608644

>>> host: crictl pods:
* Profile "cilium-608644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-608644"

>>> host: crictl containers:
* Profile "cilium-608644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-608644"

>>> k8s: describe netcat deployment:
error: context "cilium-608644" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-608644" does not exist

>>> k8s: netcat logs:
error: context "cilium-608644" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-608644" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-608644" does not exist

>>> k8s: coredns logs:
error: context "cilium-608644" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-608644" does not exist

>>> k8s: api server logs:
error: context "cilium-608644" does not exist

>>> host: /etc/cni:
* Profile "cilium-608644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-608644"

>>> host: ip a s:
* Profile "cilium-608644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-608644"

>>> host: ip r s:
* Profile "cilium-608644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-608644"

>>> host: iptables-save:
* Profile "cilium-608644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-608644"

>>> host: iptables table nat:
* Profile "cilium-608644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-608644"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-608644

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-608644

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-608644" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-608644" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-608644

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-608644

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-608644" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-608644" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-608644" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-608644" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-608644" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-608644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-608644"

>>> host: kubelet daemon config:
* Profile "cilium-608644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-608644"

>>> k8s: kubelet logs:
* Profile "cilium-608644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-608644"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-608644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-608644"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-608644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-608644"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-608644

>>> host: docker daemon status:
* Profile "cilium-608644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-608644"

>>> host: docker daemon config:
* Profile "cilium-608644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-608644"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-608644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-608644"

>>> host: docker system info:
* Profile "cilium-608644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-608644"

>>> host: cri-docker daemon status:
* Profile "cilium-608644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-608644"

>>> host: cri-docker daemon config:
* Profile "cilium-608644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-608644"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-608644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-608644"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-608644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-608644"

>>> host: cri-dockerd version:
* Profile "cilium-608644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-608644"

>>> host: containerd daemon status:
* Profile "cilium-608644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-608644"

>>> host: containerd daemon config:
* Profile "cilium-608644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-608644"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-608644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-608644"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-608644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-608644"

>>> host: containerd config dump:
* Profile "cilium-608644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-608644"

>>> host: crio daemon status:
* Profile "cilium-608644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-608644"

>>> host: crio daemon config:
* Profile "cilium-608644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-608644"

>>> host: /etc/crio:
* Profile "cilium-608644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-608644"

>>> host: crio config:
* Profile "cilium-608644" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-608644"

----------------------- debugLogs end: cilium-608644 [took: 5.944887985s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-608644" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-608644
--- SKIP: TestNetworkPlugins/group/cilium (6.10s)

                                                
                                    