Test Report: Docker_Linux_containerd_arm64 18358

2f1fe73fe0a81db98fd5a1fcfb9006c4b42c71ed:2024-03-12:33520

Failed tests (7/335)

TestAddons/parallel/Ingress (38.55s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-340965 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-340965 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-340965 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [5bd2e9ee-9b6c-43d8-bb9b-2d9362b613da] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [5bd2e9ee-9b6c-43d8-bb9b-2d9362b613da] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.004274186s
addons_test.go:262: (dbg) Run:  out/minikube-linux-arm64 -p addons-340965 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-340965 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-arm64 -p addons-340965 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:297: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.066505492s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
addons_test.go:299: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:303: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
addons_test.go:306: (dbg) Run:  out/minikube-linux-arm64 -p addons-340965 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-arm64 -p addons-340965 addons disable ingress-dns --alsologtostderr -v=1: (2.391841091s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-arm64 -p addons-340965 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-arm64 -p addons-340965 addons disable ingress --alsologtostderr -v=1: (7.800833168s)
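
Note: the failure above is the ingress-dns check: an nslookup against the minikube node IP timed out. A rough manual reproduction, assuming the cluster is still running with the same profile name and node IP reported earlier in this log:

    out/minikube-linux-arm64 -p addons-340965 ip            # prints the node IP (192.168.49.2 in this run)
    nslookup hello-john.test 192.168.49.2                   # the query that timed out
    dig +time=5 +tries=1 @192.168.49.2 hello-john.test      # same query with an explicit timeout and a single try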
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-340965
helpers_test.go:235: (dbg) docker inspect addons-340965:

-- stdout --
	[
	    {
	        "Id": "2a2013cb2c528cc8bd00d6649d133df00a87fb4a4178a1cb85a1ee81f399137f",
	        "Created": "2024-03-11T23:34:08.970183812Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 988957,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-03-11T23:34:09.288248771Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:4a9b65157dd7fb2ddb7cb7afe975b3dc288e9877c60d13613a69dd41a70e2e4e",
	        "ResolvConfPath": "/var/lib/docker/containers/2a2013cb2c528cc8bd00d6649d133df00a87fb4a4178a1cb85a1ee81f399137f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2a2013cb2c528cc8bd00d6649d133df00a87fb4a4178a1cb85a1ee81f399137f/hostname",
	        "HostsPath": "/var/lib/docker/containers/2a2013cb2c528cc8bd00d6649d133df00a87fb4a4178a1cb85a1ee81f399137f/hosts",
	        "LogPath": "/var/lib/docker/containers/2a2013cb2c528cc8bd00d6649d133df00a87fb4a4178a1cb85a1ee81f399137f/2a2013cb2c528cc8bd00d6649d133df00a87fb4a4178a1cb85a1ee81f399137f-json.log",
	        "Name": "/addons-340965",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-340965:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-340965",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/d17741bb473dc7c178ab5058e241005cbe65467b871b986e63b8df689b7c9a3f-init/diff:/var/lib/docker/overlay2/af090fb944a3b68787e040c2e3137e8bdfd21b050bcd01e191acaa1449d77a1d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d17741bb473dc7c178ab5058e241005cbe65467b871b986e63b8df689b7c9a3f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d17741bb473dc7c178ab5058e241005cbe65467b871b986e63b8df689b7c9a3f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d17741bb473dc7c178ab5058e241005cbe65467b871b986e63b8df689b7c9a3f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-340965",
	                "Source": "/var/lib/docker/volumes/addons-340965/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-340965",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-340965",
	                "name.minikube.sigs.k8s.io": "addons-340965",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c62c0987401b4eb48bc9a25e35d79b263ed58de606584b40fa54cb16d7a2ec12",
	            "SandboxKey": "/var/run/docker/netns/c62c0987401b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33902"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33901"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33898"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33900"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33899"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-340965": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "2a2013cb2c52",
	                        "addons-340965"
	                    ],
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "50ce0ba9ec361b6b1fe0b64b1cef1c4c578927f5e65ceb790228654033c69af1",
	                    "EndpointID": "fa754f26ccc124ce69030264cf8dbb9c570338195542b790ba2404c54f1bf518",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "addons-340965",
	                        "2a2013cb2c52"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
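
Note: the inspect dump above is the full JSON; individual fields can be pulled with a Go template instead. For example, the two queries minikube itself issues later in this log, shown here in plain shell form:

    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-340965   # mapped SSH host port (33902 in this run)
    docker container inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' addons-340965          # container IP (192.168.49.2 in this run)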
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-340965 -n addons-340965
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-340965 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-340965 logs -n 25: (1.508590192s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-120081              | download-only-120081   | jenkins | v1.32.0 | 11 Mar 24 23:33 UTC | 11 Mar 24 23:33 UTC |
	| start   | -o=json --download-only              | download-only-080906   | jenkins | v1.32.0 | 11 Mar 24 23:33 UTC |                     |
	|         | -p download-only-080906              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.32.0 | 11 Mar 24 23:33 UTC | 11 Mar 24 23:33 UTC |
	| delete  | -p download-only-080906              | download-only-080906   | jenkins | v1.32.0 | 11 Mar 24 23:33 UTC | 11 Mar 24 23:33 UTC |
	| start   | -o=json --download-only              | download-only-667507   | jenkins | v1.32.0 | 11 Mar 24 23:33 UTC |                     |
	|         | -p download-only-667507              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2    |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.32.0 | 11 Mar 24 23:33 UTC | 11 Mar 24 23:33 UTC |
	| delete  | -p download-only-667507              | download-only-667507   | jenkins | v1.32.0 | 11 Mar 24 23:33 UTC | 11 Mar 24 23:33 UTC |
	| delete  | -p download-only-120081              | download-only-120081   | jenkins | v1.32.0 | 11 Mar 24 23:33 UTC | 11 Mar 24 23:33 UTC |
	| delete  | -p download-only-080906              | download-only-080906   | jenkins | v1.32.0 | 11 Mar 24 23:33 UTC | 11 Mar 24 23:33 UTC |
	| delete  | -p download-only-667507              | download-only-667507   | jenkins | v1.32.0 | 11 Mar 24 23:33 UTC | 11 Mar 24 23:33 UTC |
	| start   | --download-only -p                   | download-docker-098686 | jenkins | v1.32.0 | 11 Mar 24 23:33 UTC |                     |
	|         | download-docker-098686               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p download-docker-098686            | download-docker-098686 | jenkins | v1.32.0 | 11 Mar 24 23:33 UTC | 11 Mar 24 23:33 UTC |
	| start   | --download-only -p                   | binary-mirror-718612   | jenkins | v1.32.0 | 11 Mar 24 23:33 UTC |                     |
	|         | binary-mirror-718612                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:35069               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-718612              | binary-mirror-718612   | jenkins | v1.32.0 | 11 Mar 24 23:33 UTC | 11 Mar 24 23:33 UTC |
	| addons  | enable dashboard -p                  | addons-340965          | jenkins | v1.32.0 | 11 Mar 24 23:33 UTC |                     |
	|         | addons-340965                        |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                 | addons-340965          | jenkins | v1.32.0 | 11 Mar 24 23:33 UTC |                     |
	|         | addons-340965                        |                        |         |         |                     |                     |
	| start   | -p addons-340965 --wait=true         | addons-340965          | jenkins | v1.32.0 | 11 Mar 24 23:33 UTC | 11 Mar 24 23:35 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --driver=docker        |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	| ip      | addons-340965 ip                     | addons-340965          | jenkins | v1.32.0 | 11 Mar 24 23:35 UTC | 11 Mar 24 23:35 UTC |
	| addons  | addons-340965 addons disable         | addons-340965          | jenkins | v1.32.0 | 11 Mar 24 23:35 UTC | 11 Mar 24 23:35 UTC |
	|         | registry --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | addons-340965 addons                 | addons-340965          | jenkins | v1.32.0 | 11 Mar 24 23:36 UTC | 11 Mar 24 23:36 UTC |
	|         | disable metrics-server               |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p          | addons-340965          | jenkins | v1.32.0 | 11 Mar 24 23:36 UTC | 11 Mar 24 23:36 UTC |
	|         | addons-340965                        |                        |         |         |                     |                     |
	| ssh     | addons-340965 ssh curl -s            | addons-340965          | jenkins | v1.32.0 | 11 Mar 24 23:36 UTC | 11 Mar 24 23:36 UTC |
	|         | http://127.0.0.1/ -H 'Host:          |                        |         |         |                     |                     |
	|         | nginx.example.com'                   |                        |         |         |                     |                     |
	| ip      | addons-340965 ip                     | addons-340965          | jenkins | v1.32.0 | 11 Mar 24 23:36 UTC | 11 Mar 24 23:36 UTC |
	| addons  | addons-340965 addons disable         | addons-340965          | jenkins | v1.32.0 | 11 Mar 24 23:36 UTC | 11 Mar 24 23:36 UTC |
	|         | ingress-dns --alsologtostderr        |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | addons-340965 addons disable         | addons-340965          | jenkins | v1.32.0 | 11 Mar 24 23:36 UTC | 11 Mar 24 23:36 UTC |
	|         | ingress --alsologtostderr -v=1       |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
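	
	Note: the cluster under test comes from the single start row above; reconstructed from its Args column into one invocation (shown for reference only, not re-run here):
	
	out/minikube-linux-arm64 start -p addons-340965 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker --container-runtime=containerd --addons=ingress --addons=ingress-dns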
	
	
	==> Last Start <==
	Log file created at: 2024/03/11 23:33:45
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.22.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0311 23:33:45.863804  988495 out.go:291] Setting OutFile to fd 1 ...
	I0311 23:33:45.863991  988495 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 23:33:45.864003  988495 out.go:304] Setting ErrFile to fd 2...
	I0311 23:33:45.864009  988495 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 23:33:45.864269  988495 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18358-982285/.minikube/bin
	I0311 23:33:45.864763  988495 out.go:298] Setting JSON to false
	I0311 23:33:45.865641  988495 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":15374,"bootTime":1710184652,"procs":161,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0311 23:33:45.865710  988495 start.go:139] virtualization:  
	I0311 23:33:45.868428  988495 out.go:177] * [addons-340965] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0311 23:33:45.870820  988495 out.go:177]   - MINIKUBE_LOCATION=18358
	I0311 23:33:45.872540  988495 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 23:33:45.870853  988495 notify.go:220] Checking for updates...
	I0311 23:33:45.876210  988495 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18358-982285/kubeconfig
	I0311 23:33:45.878265  988495 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18358-982285/.minikube
	I0311 23:33:45.879974  988495 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0311 23:33:45.881636  988495 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0311 23:33:45.883506  988495 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 23:33:45.905766  988495 docker.go:122] docker version: linux-25.0.4:Docker Engine - Community
	I0311 23:33:45.905884  988495 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0311 23:33:45.967895  988495 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:49 SystemTime:2024-03-11 23:33:45.958412433 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0311 23:33:45.968009  988495 docker.go:295] overlay module found
	I0311 23:33:45.970163  988495 out.go:177] * Using the docker driver based on user configuration
	I0311 23:33:45.972115  988495 start.go:297] selected driver: docker
	I0311 23:33:45.972137  988495 start.go:901] validating driver "docker" against <nil>
	I0311 23:33:45.972153  988495 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0311 23:33:45.972772  988495 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0311 23:33:46.032701  988495 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:49 SystemTime:2024-03-11 23:33:46.023263592 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0311 23:33:46.032878  988495 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0311 23:33:46.033110  988495 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0311 23:33:46.035101  988495 out.go:177] * Using Docker driver with root privileges
	I0311 23:33:46.037134  988495 cni.go:84] Creating CNI manager for ""
	I0311 23:33:46.037158  988495 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0311 23:33:46.037177  988495 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0311 23:33:46.037264  988495 start.go:340] cluster config:
	{Name:addons-340965 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-340965 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 23:33:46.039372  988495 out.go:177] * Starting "addons-340965" primary control-plane node in "addons-340965" cluster
	I0311 23:33:46.041194  988495 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0311 23:33:46.043067  988495 out.go:177] * Pulling base image v0.0.42-1708944392-18244 ...
	I0311 23:33:46.044712  988495 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0311 23:33:46.044737  988495 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local docker daemon
	I0311 23:33:46.044760  988495 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18358-982285/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4
	I0311 23:33:46.044769  988495 cache.go:56] Caching tarball of preloaded images
	I0311 23:33:46.044849  988495 preload.go:173] Found /home/jenkins/minikube-integration/18358-982285/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0311 23:33:46.044859  988495 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on containerd
	I0311 23:33:46.045242  988495 profile.go:142] Saving config to /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/addons-340965/config.json ...
	I0311 23:33:46.045279  988495 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/addons-340965/config.json: {Name:mk2429a5b466d80a5558e8649bcd19697ec3cdd7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 23:33:46.059792  988495 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 to local cache
	I0311 23:33:46.059932  988495 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local cache directory
	I0311 23:33:46.059955  988495 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local cache directory, skipping pull
	I0311 23:33:46.059964  988495 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 exists in cache, skipping pull
	I0311 23:33:46.059972  988495 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 as a tarball
	I0311 23:33:46.059981  988495 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 from local cache
	I0311 23:34:01.940571  988495 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 from cached tarball
	I0311 23:34:01.940612  988495 cache.go:194] Successfully downloaded all kic artifacts
	I0311 23:34:01.940644  988495 start.go:360] acquireMachinesLock for addons-340965: {Name:mk5067a8ff42ef68cd5f8142a11e5f1bbc82fa9d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0311 23:34:01.941220  988495 start.go:364] duration metric: took 551.836µs to acquireMachinesLock for "addons-340965"
	I0311 23:34:01.941260  988495 start.go:93] Provisioning new machine with config: &{Name:addons-340965 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-340965 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0311 23:34:01.941355  988495 start.go:125] createHost starting for "" (driver="docker")
	I0311 23:34:01.944155  988495 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0311 23:34:01.944437  988495 start.go:159] libmachine.API.Create for "addons-340965" (driver="docker")
	I0311 23:34:01.944475  988495 client.go:168] LocalClient.Create starting
	I0311 23:34:01.944607  988495 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18358-982285/.minikube/certs/ca.pem
	I0311 23:34:02.101213  988495 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18358-982285/.minikube/certs/cert.pem
	I0311 23:34:02.377258  988495 cli_runner.go:164] Run: docker network inspect addons-340965 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0311 23:34:02.392489  988495 cli_runner.go:211] docker network inspect addons-340965 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0311 23:34:02.392587  988495 network_create.go:281] running [docker network inspect addons-340965] to gather additional debugging logs...
	I0311 23:34:02.392608  988495 cli_runner.go:164] Run: docker network inspect addons-340965
	W0311 23:34:02.407179  988495 cli_runner.go:211] docker network inspect addons-340965 returned with exit code 1
	I0311 23:34:02.407222  988495 network_create.go:284] error running [docker network inspect addons-340965]: docker network inspect addons-340965: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-340965 not found
	I0311 23:34:02.407234  988495 network_create.go:286] output of [docker network inspect addons-340965]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-340965 not found
	
	** /stderr **
	I0311 23:34:02.407405  988495 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0311 23:34:02.422878  988495 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400251b380}
	I0311 23:34:02.422918  988495 network_create.go:124] attempt to create docker network addons-340965 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0311 23:34:02.422979  988495 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-340965 addons-340965
	I0311 23:34:02.485018  988495 network_create.go:108] docker network addons-340965 192.168.49.0/24 created
	I0311 23:34:02.485066  988495 kic.go:121] calculated static IP "192.168.49.2" for the "addons-340965" container
	I0311 23:34:02.485137  988495 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0311 23:34:02.499926  988495 cli_runner.go:164] Run: docker volume create addons-340965 --label name.minikube.sigs.k8s.io=addons-340965 --label created_by.minikube.sigs.k8s.io=true
	I0311 23:34:02.517138  988495 oci.go:103] Successfully created a docker volume addons-340965
	I0311 23:34:02.517232  988495 cli_runner.go:164] Run: docker run --rm --name addons-340965-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-340965 --entrypoint /usr/bin/test -v addons-340965:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 -d /var/lib
	I0311 23:34:04.698842  988495 cli_runner.go:217] Completed: docker run --rm --name addons-340965-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-340965 --entrypoint /usr/bin/test -v addons-340965:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 -d /var/lib: (2.181571345s)
	I0311 23:34:04.698875  988495 oci.go:107] Successfully prepared a docker volume addons-340965
	I0311 23:34:04.698899  988495 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0311 23:34:04.698920  988495 kic.go:194] Starting extracting preloaded images to volume ...
	I0311 23:34:04.699012  988495 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18358-982285/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-340965:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 -I lz4 -xf /preloaded.tar -C /extractDir
	I0311 23:34:08.904180  988495 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18358-982285/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-340965:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 -I lz4 -xf /preloaded.tar -C /extractDir: (4.205130041s)
	I0311 23:34:08.904214  988495 kic.go:203] duration metric: took 4.205289839s to extract preloaded images to volume ...
	W0311 23:34:08.904385  988495 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0311 23:34:08.904500  988495 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0311 23:34:08.956143  988495 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-340965 --name addons-340965 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-340965 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-340965 --network addons-340965 --ip 192.168.49.2 --volume addons-340965:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08
	I0311 23:34:09.297260  988495 cli_runner.go:164] Run: docker container inspect addons-340965 --format={{.State.Running}}
	I0311 23:34:09.326746  988495 cli_runner.go:164] Run: docker container inspect addons-340965 --format={{.State.Status}}
	I0311 23:34:09.346984  988495 cli_runner.go:164] Run: docker exec addons-340965 stat /var/lib/dpkg/alternatives/iptables
	I0311 23:34:09.396970  988495 oci.go:144] the created container "addons-340965" has a running status.
	I0311 23:34:09.397002  988495 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18358-982285/.minikube/machines/addons-340965/id_rsa...
	I0311 23:34:10.494201  988495 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18358-982285/.minikube/machines/addons-340965/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0311 23:34:10.515008  988495 cli_runner.go:164] Run: docker container inspect addons-340965 --format={{.State.Status}}
	I0311 23:34:10.532784  988495 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0311 23:34:10.532807  988495 kic_runner.go:114] Args: [docker exec --privileged addons-340965 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0311 23:34:10.597225  988495 cli_runner.go:164] Run: docker container inspect addons-340965 --format={{.State.Status}}
	I0311 23:34:10.613224  988495 machine.go:94] provisionDockerMachine start ...
	I0311 23:34:10.613317  988495 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-340965
	I0311 23:34:10.628692  988495 main.go:141] libmachine: Using SSH client type: native
	I0311 23:34:10.628978  988495 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1ca0] 0x3e4500 <nil>  [] 0s} 127.0.0.1 33902 <nil> <nil>}
	I0311 23:34:10.628995  988495 main.go:141] libmachine: About to run SSH command:
	hostname
	I0311 23:34:10.758642  988495 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-340965
	
	I0311 23:34:10.758668  988495 ubuntu.go:169] provisioning hostname "addons-340965"
	I0311 23:34:10.758736  988495 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-340965
	I0311 23:34:10.777517  988495 main.go:141] libmachine: Using SSH client type: native
	I0311 23:34:10.777762  988495 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1ca0] 0x3e4500 <nil>  [] 0s} 127.0.0.1 33902 <nil> <nil>}
	I0311 23:34:10.777779  988495 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-340965 && echo "addons-340965" | sudo tee /etc/hostname
	I0311 23:34:10.919730  988495 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-340965
	
	I0311 23:34:10.919860  988495 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-340965
	I0311 23:34:10.936961  988495 main.go:141] libmachine: Using SSH client type: native
	I0311 23:34:10.937210  988495 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1ca0] 0x3e4500 <nil>  [] 0s} 127.0.0.1 33902 <nil> <nil>}
	I0311 23:34:10.937231  988495 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-340965' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-340965/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-340965' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0311 23:34:11.067775  988495 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0311 23:34:11.067803  988495 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18358-982285/.minikube CaCertPath:/home/jenkins/minikube-integration/18358-982285/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18358-982285/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18358-982285/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18358-982285/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18358-982285/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18358-982285/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18358-982285/.minikube}
	I0311 23:34:11.067835  988495 ubuntu.go:177] setting up certificates
	I0311 23:34:11.067846  988495 provision.go:84] configureAuth start
	I0311 23:34:11.067933  988495 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-340965
	I0311 23:34:11.084487  988495 provision.go:143] copyHostCerts
	I0311 23:34:11.084574  988495 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-982285/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18358-982285/.minikube/ca.pem (1082 bytes)
	I0311 23:34:11.084827  988495 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-982285/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18358-982285/.minikube/cert.pem (1123 bytes)
	I0311 23:34:11.084933  988495 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-982285/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18358-982285/.minikube/key.pem (1679 bytes)
	I0311 23:34:11.084987  988495 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18358-982285/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18358-982285/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18358-982285/.minikube/certs/ca-key.pem org=jenkins.addons-340965 san=[127.0.0.1 192.168.49.2 addons-340965 localhost minikube]
	I0311 23:34:11.512884  988495 provision.go:177] copyRemoteCerts
	I0311 23:34:11.512969  988495 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0311 23:34:11.513026  988495 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-340965
	I0311 23:34:11.528193  988495 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33902 SSHKeyPath:/home/jenkins/minikube-integration/18358-982285/.minikube/machines/addons-340965/id_rsa Username:docker}
	I0311 23:34:11.619859  988495 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-982285/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0311 23:34:11.644354  988495 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-982285/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0311 23:34:11.668725  988495 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-982285/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0311 23:34:11.691914  988495 provision.go:87] duration metric: took 624.047148ms to configureAuth
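
The server certificate provisioned above (generated at 23:34:11.084987 with SANs 127.0.0.1, 192.168.49.2, addons-340965, localhost, minikube) now lives at /etc/docker/server.pem on the node. A quick verification sketch, not part of the test run, would print the SAN extension with openssl:

	# Sketch: confirm the SANs baked into the provisioned server cert.
	openssl x509 -in /etc/docker/server.pem -noout -text \
	        | grep -A1 'Subject Alternative Name'
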
	I0311 23:34:11.691938  988495 ubuntu.go:193] setting minikube options for container-runtime
	I0311 23:34:11.692138  988495 config.go:182] Loaded profile config "addons-340965": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0311 23:34:11.692146  988495 machine.go:97] duration metric: took 1.078904808s to provisionDockerMachine
	I0311 23:34:11.692153  988495 client.go:171] duration metric: took 9.747666311s to LocalClient.Create
	I0311 23:34:11.692167  988495 start.go:167] duration metric: took 9.747731105s to libmachine.API.Create "addons-340965"
	I0311 23:34:11.692174  988495 start.go:293] postStartSetup for "addons-340965" (driver="docker")
	I0311 23:34:11.692184  988495 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0311 23:34:11.692233  988495 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0311 23:34:11.692272  988495 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-340965
	I0311 23:34:11.707733  988495 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33902 SSHKeyPath:/home/jenkins/minikube-integration/18358-982285/.minikube/machines/addons-340965/id_rsa Username:docker}
	I0311 23:34:11.801886  988495 ssh_runner.go:195] Run: cat /etc/os-release
	I0311 23:34:11.805284  988495 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0311 23:34:11.805320  988495 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0311 23:34:11.805332  988495 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0311 23:34:11.805339  988495 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0311 23:34:11.805363  988495 filesync.go:126] Scanning /home/jenkins/minikube-integration/18358-982285/.minikube/addons for local assets ...
	I0311 23:34:11.805435  988495 filesync.go:126] Scanning /home/jenkins/minikube-integration/18358-982285/.minikube/files for local assets ...
	I0311 23:34:11.805465  988495 start.go:296] duration metric: took 113.285847ms for postStartSetup
	I0311 23:34:11.805789  988495 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-340965
	I0311 23:34:11.822010  988495 profile.go:142] Saving config to /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/addons-340965/config.json ...
	I0311 23:34:11.822307  988495 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0311 23:34:11.822360  988495 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-340965
	I0311 23:34:11.840586  988495 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33902 SSHKeyPath:/home/jenkins/minikube-integration/18358-982285/.minikube/machines/addons-340965/id_rsa Username:docker}
	I0311 23:34:11.932274  988495 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0311 23:34:11.937070  988495 start.go:128] duration metric: took 9.995698093s to createHost
	I0311 23:34:11.937094  988495 start.go:83] releasing machines lock for "addons-340965", held for 9.99585538s
	I0311 23:34:11.937185  988495 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-340965
	I0311 23:34:11.954704  988495 ssh_runner.go:195] Run: cat /version.json
	I0311 23:34:11.954758  988495 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-340965
	I0311 23:34:11.954774  988495 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0311 23:34:11.954812  988495 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-340965
	I0311 23:34:11.985775  988495 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33902 SSHKeyPath:/home/jenkins/minikube-integration/18358-982285/.minikube/machines/addons-340965/id_rsa Username:docker}
	I0311 23:34:11.986544  988495 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33902 SSHKeyPath:/home/jenkins/minikube-integration/18358-982285/.minikube/machines/addons-340965/id_rsa Username:docker}
	I0311 23:34:12.196386  988495 ssh_runner.go:195] Run: systemctl --version
	I0311 23:34:12.200765  988495 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0311 23:34:12.204759  988495 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0311 23:34:12.229218  988495 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0311 23:34:12.229327  988495 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0311 23:34:12.258144  988495 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
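
The find/-exec pair above is how conflicting CNI configs get sidelined: every bridge or podman config is renamed with a .mk_disabled suffix so the runtime stops loading it, which is exactly what the "disabled [...] bridge cni config(s)" summary reports. The same effect as a plain loop (a sketch; paths taken from the log):

	# Sketch: disable bridge/podman CNI configs by renaming them aside.
	for f in /etc/cni/net.d/*bridge* /etc/cni/net.d/*podman*; do
	        [ -e "$f" ] || continue                      # glob matched nothing
	        case "$f" in *.mk_disabled) continue ;; esac # already disabled
	        sudo mv "$f" "$f.mk_disabled"
	done
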
	I0311 23:34:12.258220  988495 start.go:494] detecting cgroup driver to use...
	I0311 23:34:12.258267  988495 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0311 23:34:12.258339  988495 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0311 23:34:12.271009  988495 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0311 23:34:12.282624  988495 docker.go:217] disabling cri-docker service (if available) ...
	I0311 23:34:12.282693  988495 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0311 23:34:12.296672  988495 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0311 23:34:12.311825  988495 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0311 23:34:12.392201  988495 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0311 23:34:12.489794  988495 docker.go:233] disabling docker service ...
	I0311 23:34:12.489866  988495 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0311 23:34:12.509299  988495 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0311 23:34:12.520522  988495 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0311 23:34:12.607982  988495 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0311 23:34:12.700661  988495 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0311 23:34:12.712735  988495 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0311 23:34:12.729508  988495 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0311 23:34:12.739957  988495 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0311 23:34:12.750200  988495 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0311 23:34:12.750325  988495 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0311 23:34:12.760579  988495 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0311 23:34:12.770669  988495 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0311 23:34:12.780774  988495 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0311 23:34:12.790412  988495 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0311 23:34:12.807031  988495 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
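
The sed series from 23:34:12.729 onward edits /etc/containerd/config.toml in place: the pause image is pinned to registry.k8s.io/pause:3.9, restrict_oom_score_adj is switched off, SystemdCgroup is set false to match the cgroupfs driver detected on the host, legacy runc v1 runtimes are mapped to io.containerd.runc.v2, and conf_dir is pointed at /etc/cni/net.d. A sketch for spot-checking the result (expected values inferred from the sed commands above, not copied from the host):

	# Sketch: inspect the values the edits above should have left behind.
	sudo grep -E 'sandbox_image|restrict_oom_score_adj|SystemdCgroup|conf_dir' /etc/containerd/config.toml
	# Expected, per the edits:
	#   sandbox_image = "registry.k8s.io/pause:3.9"
	#   restrict_oom_score_adj = false
	#   SystemdCgroup = false
	#   conf_dir = "/etc/cni/net.d"
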
	I0311 23:34:12.818061  988495 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0311 23:34:12.826460  988495 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
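
The two probes above satisfy kubeadm preflight requirements: bridged traffic must traverse iptables and IPv4 forwarding must be on. The echo into /proc only lasts until reboot; a persistent variant (a sketch, not something this run does) would go through sysctl.d:

	# Sketch: persist the same kernel settings across reboots.
	printf '%s\n' 'net.bridge.bridge-nf-call-iptables = 1' 'net.ipv4.ip_forward = 1' \
	        | sudo tee /etc/sysctl.d/99-kubernetes.conf
	sudo sysctl --system    # reload every sysctl.d fragment
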
	I0311 23:34:12.835024  988495 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 23:34:12.919134  988495 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0311 23:34:13.055536  988495 start.go:541] Will wait 60s for socket path /run/containerd/containerd.sock
	I0311 23:34:13.055644  988495 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0311 23:34:13.059092  988495 start.go:562] Will wait 60s for crictl version
	I0311 23:34:13.059156  988495 ssh_runner.go:195] Run: which crictl
	I0311 23:34:13.062534  988495 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0311 23:34:13.101000  988495 start.go:578] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.28
	RuntimeApiVersion:  v1
	I0311 23:34:13.101109  988495 ssh_runner.go:195] Run: containerd --version
	I0311 23:34:13.122498  988495 ssh_runner.go:195] Run: containerd --version
	I0311 23:34:13.147701  988495 out.go:177] * Preparing Kubernetes v1.28.4 on containerd 1.6.28 ...
	I0311 23:34:13.149521  988495 cli_runner.go:164] Run: docker network inspect addons-340965 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0311 23:34:13.165523  988495 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0311 23:34:13.169248  988495 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
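
The bash one-liner above is minikube's replace-or-add /etc/hosts update: filter out any stale line ending in the tab-separated name, append the fresh mapping, stage the result in a PID-keyed temp file, then copy it over the original in a single step (the same pattern recurs at 23:34:13.365992 for control-plane.minikube.internal). Spelled out as a sketch:

	# Sketch: replace-or-add an "<ip><TAB><name>" line in /etc/hosts.
	NAME=host.minikube.internal; IP=192.168.49.1
	{ grep -v $'\t'"${NAME}"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts
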
	I0311 23:34:13.179804  988495 kubeadm.go:877] updating cluster {Name:addons-340965 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-340965 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0311 23:34:13.179932  988495 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0311 23:34:13.179993  988495 ssh_runner.go:195] Run: sudo crictl images --output json
	I0311 23:34:13.215639  988495 containerd.go:612] all images are preloaded for containerd runtime.
	I0311 23:34:13.215661  988495 containerd.go:519] Images already preloaded, skipping extraction
	I0311 23:34:13.215728  988495 ssh_runner.go:195] Run: sudo crictl images --output json
	I0311 23:34:13.254599  988495 containerd.go:612] all images are preloaded for containerd runtime.
	I0311 23:34:13.254621  988495 cache_images.go:84] Images are preloaded, skipping loading
	I0311 23:34:13.254629  988495 kubeadm.go:928] updating node { 192.168.49.2 8443 v1.28.4 containerd true true} ...
	I0311 23:34:13.254731  988495 kubeadm.go:940] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-340965 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-340965 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
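
In the kubelet unit text above, the empty `ExecStart=` line is the standard systemd idiom for a drop-in override: a blank assignment clears whatever ExecStart the base kubelet.service defined before the next line sets minikube's command. The text is scp'd below as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf and picked up by the daemon-reload at 23:34:13.376959. A sketch for inspecting the merged result:

	# Sketch: view the unit plus its drop-ins after daemon-reload.
	systemctl cat kubelet.service
	systemctl show kubelet -p ExecStart    # effective command line
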
	I0311 23:34:13.254798  988495 ssh_runner.go:195] Run: sudo crictl info
	I0311 23:34:13.290262  988495 cni.go:84] Creating CNI manager for ""
	I0311 23:34:13.290285  988495 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0311 23:34:13.290294  988495 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0311 23:34:13.290316  988495 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-340965 NodeName:addons-340965 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0311 23:34:13.290447  988495 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-340965"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
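
The four YAML documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are written below to /var/tmp/minikube/kubeadm.yaml.new. kubeadm can sanity-check such a file without touching the node, which this run does not do but which is handy when debugging config drift (a sketch):

	# Sketch: dry-run the generated config before the real init.
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run
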
	
	I0311 23:34:13.290517  988495 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0311 23:34:13.299487  988495 binaries.go:44] Found k8s binaries, skipping transfer
	I0311 23:34:13.299564  988495 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0311 23:34:13.308222  988495 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0311 23:34:13.326544  988495 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0311 23:34:13.344906  988495 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2167 bytes)
	I0311 23:34:13.362608  988495 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0311 23:34:13.365992  988495 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0311 23:34:13.376959  988495 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 23:34:13.458260  988495 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0311 23:34:13.471835  988495 certs.go:68] Setting up /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/addons-340965 for IP: 192.168.49.2
	I0311 23:34:13.471858  988495 certs.go:194] generating shared ca certs ...
	I0311 23:34:13.471878  988495 certs.go:226] acquiring lock for ca certs: {Name:mk0a8924146da92e76e9ff4162540f84539e9725 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 23:34:13.472601  988495 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/18358-982285/.minikube/ca.key
	I0311 23:34:14.092691  988495 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18358-982285/.minikube/ca.crt ...
	I0311 23:34:14.092722  988495 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18358-982285/.minikube/ca.crt: {Name:mk997857594f54b1496be6199caf57bbbad16635 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 23:34:14.093519  988495 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18358-982285/.minikube/ca.key ...
	I0311 23:34:14.093537  988495 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18358-982285/.minikube/ca.key: {Name:mke1a9d0202ff5774d1037569989f02af4579911 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 23:34:14.094126  988495 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18358-982285/.minikube/proxy-client-ca.key
	I0311 23:34:14.437251  988495 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18358-982285/.minikube/proxy-client-ca.crt ...
	I0311 23:34:14.437280  988495 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18358-982285/.minikube/proxy-client-ca.crt: {Name:mk41fee482a60342b22ad5a3e0290a4512f75fb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 23:34:14.437462  988495 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18358-982285/.minikube/proxy-client-ca.key ...
	I0311 23:34:14.437475  988495 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18358-982285/.minikube/proxy-client-ca.key: {Name:mkf9dadbec6731ba34d9e53eabf35a7023b53b27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 23:34:14.437557  988495 certs.go:256] generating profile certs ...
	I0311 23:34:14.437624  988495 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/addons-340965/client.key
	I0311 23:34:14.437644  988495 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/addons-340965/client.crt with IP's: []
	I0311 23:34:14.765647  988495 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/addons-340965/client.crt ...
	I0311 23:34:14.765681  988495 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/addons-340965/client.crt: {Name:mk944b77a3d473faa1140d12b698e32de97614bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 23:34:14.766368  988495 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/addons-340965/client.key ...
	I0311 23:34:14.766387  988495 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/addons-340965/client.key: {Name:mkd87b9b6815ed534394e6a430c956e9225ec051 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 23:34:14.766496  988495 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/addons-340965/apiserver.key.c7f339dd
	I0311 23:34:14.766524  988495 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/addons-340965/apiserver.crt.c7f339dd with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0311 23:34:15.084454  988495 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/addons-340965/apiserver.crt.c7f339dd ...
	I0311 23:34:15.084490  988495 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/addons-340965/apiserver.crt.c7f339dd: {Name:mk14e39d0c747891ba10506130fa1fb3b2613998 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 23:34:15.084668  988495 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/addons-340965/apiserver.key.c7f339dd ...
	I0311 23:34:15.084685  988495 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/addons-340965/apiserver.key.c7f339dd: {Name:mk76267a2e374338b7a7c4d07a6313441ed92aef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 23:34:15.084772  988495 certs.go:381] copying /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/addons-340965/apiserver.crt.c7f339dd -> /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/addons-340965/apiserver.crt
	I0311 23:34:15.084871  988495 certs.go:385] copying /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/addons-340965/apiserver.key.c7f339dd -> /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/addons-340965/apiserver.key
	I0311 23:34:15.084925  988495 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/addons-340965/proxy-client.key
	I0311 23:34:15.084950  988495 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/addons-340965/proxy-client.crt with IP's: []
	I0311 23:34:15.694586  988495 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/addons-340965/proxy-client.crt ...
	I0311 23:34:15.694619  988495 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/addons-340965/proxy-client.crt: {Name:mk3b98cb5d09ef1d3e40629997e58b8d95bfd199 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 23:34:15.695431  988495 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/addons-340965/proxy-client.key ...
	I0311 23:34:15.695450  988495 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/addons-340965/proxy-client.key: {Name:mk09331cff42c7542e8d4ffc81e2128aad112b73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 23:34:15.695990  988495 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-982285/.minikube/certs/ca-key.pem (1675 bytes)
	I0311 23:34:15.696033  988495 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-982285/.minikube/certs/ca.pem (1082 bytes)
	I0311 23:34:15.696064  988495 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-982285/.minikube/certs/cert.pem (1123 bytes)
	I0311 23:34:15.696093  988495 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-982285/.minikube/certs/key.pem (1679 bytes)
	I0311 23:34:15.696732  988495 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-982285/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0311 23:34:15.721550  988495 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-982285/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0311 23:34:15.746722  988495 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-982285/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0311 23:34:15.770826  988495 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-982285/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0311 23:34:15.794319  988495 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/addons-340965/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0311 23:34:15.821195  988495 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/addons-340965/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0311 23:34:15.846907  988495 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/addons-340965/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0311 23:34:15.873346  988495 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/addons-340965/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0311 23:34:15.902259  988495 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-982285/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0311 23:34:15.927325  988495 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0311 23:34:15.945964  988495 ssh_runner.go:195] Run: openssl version
	I0311 23:34:15.951549  988495 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0311 23:34:15.961589  988495 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0311 23:34:15.965262  988495 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 11 23:34 /usr/share/ca-certificates/minikubeCA.pem
	I0311 23:34:15.965361  988495 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0311 23:34:15.972747  988495 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
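
The openssl call at 23:34:15.965361 prints the certificate's subject hash, and the ln above creates the /etc/ssl/certs/<hash>.0 symlink that OpenSSL's directory lookup expects; b5213941 is exactly that hash for minikubeCA, which is how TLS clients on the node come to trust it. The relationship as a sketch:

	# Sketch: the symlink name is the cert's subject hash plus ".0".
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
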
	I0311 23:34:15.982345  988495 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0311 23:34:15.985634  988495 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0311 23:34:15.985710  988495 kubeadm.go:391] StartCluster: {Name:addons-340965 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-340965 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 23:34:15.985795  988495 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0311 23:34:15.985854  988495 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0311 23:34:16.029099  988495 cri.go:89] found id: ""
	I0311 23:34:16.029173  988495 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0311 23:34:16.038219  988495 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0311 23:34:16.047538  988495 kubeadm.go:213] ignoring SystemVerification for kubeadm because of docker driver
	I0311 23:34:16.047629  988495 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0311 23:34:16.057099  988495 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0311 23:34:16.057120  988495 kubeadm.go:156] found existing configuration files:
	
	I0311 23:34:16.057227  988495 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0311 23:34:16.066828  988495 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0311 23:34:16.066961  988495 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0311 23:34:16.076140  988495 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0311 23:34:16.085034  988495 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0311 23:34:16.085126  988495 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0311 23:34:16.094085  988495 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0311 23:34:16.103130  988495 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0311 23:34:16.103220  988495 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0311 23:34:16.111504  988495 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0311 23:34:16.120300  988495 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0311 23:34:16.120397  988495 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0311 23:34:16.128862  988495 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0311 23:34:16.171331  988495 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0311 23:34:16.171391  988495 kubeadm.go:309] [preflight] Running pre-flight checks
	I0311 23:34:16.209989  988495 kubeadm.go:309] [preflight] The system verification failed. Printing the output from the verification:
	I0311 23:34:16.210061  988495 kubeadm.go:309] KERNEL_VERSION: 5.15.0-1055-aws
	I0311 23:34:16.210103  988495 kubeadm.go:309] OS: Linux
	I0311 23:34:16.210159  988495 kubeadm.go:309] CGROUPS_CPU: enabled
	I0311 23:34:16.210209  988495 kubeadm.go:309] CGROUPS_CPUACCT: enabled
	I0311 23:34:16.210258  988495 kubeadm.go:309] CGROUPS_CPUSET: enabled
	I0311 23:34:16.210308  988495 kubeadm.go:309] CGROUPS_DEVICES: enabled
	I0311 23:34:16.210357  988495 kubeadm.go:309] CGROUPS_FREEZER: enabled
	I0311 23:34:16.210406  988495 kubeadm.go:309] CGROUPS_MEMORY: enabled
	I0311 23:34:16.210452  988495 kubeadm.go:309] CGROUPS_PIDS: enabled
	I0311 23:34:16.210502  988495 kubeadm.go:309] CGROUPS_HUGETLB: enabled
	I0311 23:34:16.210551  988495 kubeadm.go:309] CGROUPS_BLKIO: enabled
	I0311 23:34:16.281299  988495 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0311 23:34:16.281412  988495 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0311 23:34:16.281509  988495 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0311 23:34:16.501047  988495 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0311 23:34:16.504963  988495 out.go:204]   - Generating certificates and keys ...
	I0311 23:34:16.505125  988495 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0311 23:34:16.505197  988495 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0311 23:34:16.741756  988495 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0311 23:34:17.061690  988495 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0311 23:34:17.599640  988495 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0311 23:34:17.937926  988495 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0311 23:34:18.308064  988495 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0311 23:34:18.308362  988495 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-340965 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0311 23:34:18.624282  988495 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0311 23:34:18.624598  988495 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-340965 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0311 23:34:20.403455  988495 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0311 23:34:20.591339  988495 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0311 23:34:20.965246  988495 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0311 23:34:20.965454  988495 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0311 23:34:21.244073  988495 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0311 23:34:21.482268  988495 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0311 23:34:22.198585  988495 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0311 23:34:22.916847  988495 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0311 23:34:22.917801  988495 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0311 23:34:22.921208  988495 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0311 23:34:22.923580  988495 out.go:204]   - Booting up control plane ...
	I0311 23:34:22.923681  988495 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0311 23:34:22.923755  988495 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0311 23:34:22.924780  988495 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0311 23:34:22.936775  988495 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0311 23:34:22.937757  988495 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0311 23:34:22.937937  988495 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0311 23:34:23.051781  988495 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0311 23:34:30.551005  988495 kubeadm.go:309] [apiclient] All control plane components are healthy after 7.502386 seconds
	I0311 23:34:30.551125  988495 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0311 23:34:30.566146  988495 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0311 23:34:31.096973  988495 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0311 23:34:31.097163  988495 kubeadm.go:309] [mark-control-plane] Marking the node addons-340965 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0311 23:34:31.609414  988495 kubeadm.go:309] [bootstrap-token] Using token: 7z0zx8.iskkrb5ososjjnbs
	I0311 23:34:31.611348  988495 out.go:204]   - Configuring RBAC rules ...
	I0311 23:34:31.611482  988495 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0311 23:34:31.617104  988495 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0311 23:34:31.626827  988495 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0311 23:34:31.631248  988495 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0311 23:34:31.638947  988495 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0311 23:34:31.643375  988495 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0311 23:34:31.656877  988495 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0311 23:34:31.880954  988495 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0311 23:34:32.024511  988495 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0311 23:34:32.026384  988495 kubeadm.go:309] 
	I0311 23:34:32.026458  988495 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0311 23:34:32.026465  988495 kubeadm.go:309] 
	I0311 23:34:32.026539  988495 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0311 23:34:32.026544  988495 kubeadm.go:309] 
	I0311 23:34:32.026568  988495 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0311 23:34:32.027010  988495 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0311 23:34:32.027095  988495 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0311 23:34:32.027102  988495 kubeadm.go:309] 
	I0311 23:34:32.027155  988495 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0311 23:34:32.027160  988495 kubeadm.go:309] 
	I0311 23:34:32.027206  988495 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0311 23:34:32.027213  988495 kubeadm.go:309] 
	I0311 23:34:32.027262  988495 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0311 23:34:32.027347  988495 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0311 23:34:32.027415  988495 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0311 23:34:32.027426  988495 kubeadm.go:309] 
	I0311 23:34:32.027716  988495 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0311 23:34:32.027806  988495 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0311 23:34:32.027816  988495 kubeadm.go:309] 
	I0311 23:34:32.028081  988495 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 7z0zx8.iskkrb5ososjjnbs \
	I0311 23:34:32.028189  988495 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:26085ccf8add6982a8091cdb44f048cb24761685c1f5f1e0243eaa1369ddf2e6 \
	I0311 23:34:32.028383  988495 kubeadm.go:309] 	--control-plane 
	I0311 23:34:32.028395  988495 kubeadm.go:309] 
	I0311 23:34:32.028645  988495 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0311 23:34:32.028659  988495 kubeadm.go:309] 
	I0311 23:34:32.028939  988495 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 7z0zx8.iskkrb5ososjjnbs \
	I0311 23:34:32.029193  988495 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:26085ccf8add6982a8091cdb44f048cb24761685c1f5f1e0243eaa1369ddf2e6 
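
The --discovery-token-ca-cert-hash printed above is the SHA-256 of the cluster CA's DER-encoded public key. A joining node can recompute it from ca.crt and compare, per the documented kubeadm recipe (shown as a sketch; the path follows the certificatesDir in the config above):

	# Sketch: recompute the discovery-token CA cert hash from ca.crt.
	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	        | openssl rsa -pubin -outform der 2>/dev/null \
	        | openssl dgst -sha256 -hex | sed 's/^.* //'
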
	I0311 23:34:32.033086  988495 kubeadm.go:309] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1055-aws\n", err: exit status 1
	I0311 23:34:32.033256  988495 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0311 23:34:32.033311  988495 cni.go:84] Creating CNI manager for ""
	I0311 23:34:32.033329  988495 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0311 23:34:32.037304  988495 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0311 23:34:32.039702  988495 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0311 23:34:32.044067  988495 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0311 23:34:32.044089  988495 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0311 23:34:32.071688  988495 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0311 23:34:33.016964  988495 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0311 23:34:33.017091  988495 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 23:34:33.017209  988495 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-340965 minikube.k8s.io/updated_at=2024_03_11T23_34_33_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=e0a9a803bb8418ee87aee3b4880090eb65379520 minikube.k8s.io/name=addons-340965 minikube.k8s.io/primary=true
	I0311 23:34:33.226916  988495 ops.go:34] apiserver oom_adj: -16
	I0311 23:34:33.227012  988495 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 23:34:33.727245  988495 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 23:34:34.227197  988495 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 23:34:34.727968  988495 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 23:34:35.228083  988495 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 23:34:35.727727  988495 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 23:34:36.227275  988495 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 23:34:36.728031  988495 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 23:34:37.227139  988495 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 23:34:37.727193  988495 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 23:34:38.227673  988495 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 23:34:38.727180  988495 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 23:34:39.228049  988495 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 23:34:39.727534  988495 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 23:34:40.227846  988495 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 23:34:40.727770  988495 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 23:34:41.227168  988495 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 23:34:41.727175  988495 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 23:34:42.228005  988495 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 23:34:42.727089  988495 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 23:34:43.227361  988495 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 23:34:43.727742  988495 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0311 23:34:43.847857  988495 kubeadm.go:1106] duration metric: took 10.830834618s to wait for elevateKubeSystemPrivileges
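
The burst of identical `kubectl get sa default` runs from 23:34:33.227 to 23:34:43.727 is a fixed-interval readiness poll: minikube retries roughly every 500ms until the default ServiceAccount exists, the signal it uses for the privilege-escalation step started at 23:34:33.017; per the duration metric above, that took 10.83s here. The same wait as a one-liner sketch:

	# Sketch: poll until the default ServiceAccount exists.
	until kubectl get sa default >/dev/null 2>&1; do sleep 0.5; done
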
	W0311 23:34:43.847893  988495 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0311 23:34:43.847901  988495 kubeadm.go:393] duration metric: took 27.862216195s to StartCluster
	I0311 23:34:43.847918  988495 settings.go:142] acquiring lock: {Name:mk66549f73c966ba6f23af9cfb4fef2b1aaf9da2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 23:34:43.848046  988495 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18358-982285/kubeconfig
	I0311 23:34:43.848490  988495 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18358-982285/kubeconfig: {Name:mk502765d2bd81c45b0b0cd22382df706d40c442 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0311 23:34:43.848700  988495 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0311 23:34:43.851352  988495 out.go:177] * Verifying Kubernetes components...
	I0311 23:34:43.848833  988495 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0311 23:34:43.848997  988495 config.go:182] Loaded profile config "addons-340965": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0311 23:34:43.849005  988495 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0311 23:34:43.853309  988495 addons.go:69] Setting yakd=true in profile "addons-340965"
	I0311 23:34:43.853334  988495 addons.go:234] Setting addon yakd=true in "addons-340965"
	I0311 23:34:43.853360  988495 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0311 23:34:43.853458  988495 addons.go:69] Setting cloud-spanner=true in profile "addons-340965"
	I0311 23:34:43.853364  988495 host.go:66] Checking if "addons-340965" exists ...
	I0311 23:34:43.853480  988495 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-340965"
	I0311 23:34:43.853514  988495 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-340965"
	I0311 23:34:43.853532  988495 host.go:66] Checking if "addons-340965" exists ...
	I0311 23:34:43.853974  988495 cli_runner.go:164] Run: docker container inspect addons-340965 --format={{.State.Status}}
	I0311 23:34:43.853981  988495 cli_runner.go:164] Run: docker container inspect addons-340965 --format={{.State.Status}}
	I0311 23:34:43.853476  988495 addons.go:234] Setting addon cloud-spanner=true in "addons-340965"
	I0311 23:34:43.854772  988495 addons.go:69] Setting default-storageclass=true in profile "addons-340965"
	I0311 23:34:43.854807  988495 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-340965"
	I0311 23:34:43.854811  988495 host.go:66] Checking if "addons-340965" exists ...
	I0311 23:34:43.855082  988495 cli_runner.go:164] Run: docker container inspect addons-340965 --format={{.State.Status}}
	I0311 23:34:43.855289  988495 cli_runner.go:164] Run: docker container inspect addons-340965 --format={{.State.Status}}
	I0311 23:34:43.858862  988495 addons.go:69] Setting gcp-auth=true in profile "addons-340965"
	I0311 23:34:43.858917  988495 mustload.go:65] Loading cluster: addons-340965
	I0311 23:34:43.859112  988495 config.go:182] Loaded profile config "addons-340965": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0311 23:34:43.859586  988495 cli_runner.go:164] Run: docker container inspect addons-340965 --format={{.State.Status}}
	I0311 23:34:43.871956  988495 addons.go:69] Setting ingress=true in profile "addons-340965"
	I0311 23:34:43.872056  988495 addons.go:234] Setting addon ingress=true in "addons-340965"
	I0311 23:34:43.872264  988495 host.go:66] Checking if "addons-340965" exists ...
	I0311 23:34:43.874145  988495 cli_runner.go:164] Run: docker container inspect addons-340965 --format={{.State.Status}}
	I0311 23:34:43.881639  988495 addons.go:69] Setting volumesnapshots=true in profile "addons-340965"
	I0311 23:34:43.883221  988495 addons.go:234] Setting addon volumesnapshots=true in "addons-340965"
	I0311 23:34:43.883261  988495 host.go:66] Checking if "addons-340965" exists ...
	I0311 23:34:43.883759  988495 cli_runner.go:164] Run: docker container inspect addons-340965 --format={{.State.Status}}
	I0311 23:34:43.894937  988495 addons.go:69] Setting ingress-dns=true in profile "addons-340965"
	I0311 23:34:43.894993  988495 addons.go:234] Setting addon ingress-dns=true in "addons-340965"
	I0311 23:34:43.895145  988495 host.go:66] Checking if "addons-340965" exists ...
	I0311 23:34:43.895873  988495 cli_runner.go:164] Run: docker container inspect addons-340965 --format={{.State.Status}}
	I0311 23:34:43.918825  988495 addons.go:69] Setting inspektor-gadget=true in profile "addons-340965"
	I0311 23:34:43.918886  988495 addons.go:234] Setting addon inspektor-gadget=true in "addons-340965"
	I0311 23:34:43.918937  988495 host.go:66] Checking if "addons-340965" exists ...
	I0311 23:34:43.919646  988495 cli_runner.go:164] Run: docker container inspect addons-340965 --format={{.State.Status}}
	I0311 23:34:43.962985  988495 addons.go:69] Setting metrics-server=true in profile "addons-340965"
	I0311 23:34:43.963053  988495 addons.go:234] Setting addon metrics-server=true in "addons-340965"
	I0311 23:34:43.963094  988495 host.go:66] Checking if "addons-340965" exists ...
	I0311 23:34:43.963574  988495 cli_runner.go:164] Run: docker container inspect addons-340965 --format={{.State.Status}}
	I0311 23:34:43.981962  988495 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-340965"
	I0311 23:34:43.982027  988495 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-340965"
	I0311 23:34:43.982077  988495 host.go:66] Checking if "addons-340965" exists ...
	I0311 23:34:43.982571  988495 cli_runner.go:164] Run: docker container inspect addons-340965 --format={{.State.Status}}
	I0311 23:34:43.999400  988495 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0311 23:34:44.001734  988495 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0311 23:34:44.001755  988495 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0311 23:34:44.001823  988495 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-340965
	I0311 23:34:44.008143  988495 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.14
	I0311 23:34:43.999200  988495 addons.go:69] Setting storage-provisioner=true in profile "addons-340965"
	I0311 23:34:43.999206  988495 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-340965"
	I0311 23:34:43.999185  988495 addons.go:69] Setting registry=true in profile "addons-340965"
	I0311 23:34:44.008105  988495 addons.go:234] Setting addon default-storageclass=true in "addons-340965"
	I0311 23:34:44.010893  988495 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0311 23:34:44.010918  988495 addons.go:234] Setting addon storage-provisioner=true in "addons-340965"
	I0311 23:34:44.010943  988495 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-340965"
	I0311 23:34:44.010965  988495 addons.go:234] Setting addon registry=true in "addons-340965"
	I0311 23:34:44.012472  988495 host.go:66] Checking if "addons-340965" exists ...
	I0311 23:34:44.012969  988495 cli_runner.go:164] Run: docker container inspect addons-340965 --format={{.State.Status}}
	I0311 23:34:44.015388  988495 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0311 23:34:44.015497  988495 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-340965
	I0311 23:34:44.023122  988495 host.go:66] Checking if "addons-340965" exists ...
	I0311 23:34:44.023674  988495 cli_runner.go:164] Run: docker container inspect addons-340965 --format={{.State.Status}}
	I0311 23:34:44.035444  988495 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0311 23:34:44.039668  988495 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0311 23:34:44.042252  988495 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0311 23:34:44.044148  988495 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0311 23:34:44.049548  988495 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0311 23:34:44.051336  988495 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0311 23:34:44.102551  988495 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0311 23:34:44.052260  988495 host.go:66] Checking if "addons-340965" exists ...
	I0311 23:34:44.092476  988495 cli_runner.go:164] Run: docker container inspect addons-340965 --format={{.State.Status}}
	I0311 23:34:44.092546  988495 host.go:66] Checking if "addons-340965" exists ...
	I0311 23:34:44.140846  988495 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.26.0
	I0311 23:34:44.193824  988495 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0311 23:34:44.193895  988495 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0311 23:34:44.193987  988495 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-340965
	I0311 23:34:44.152707  988495 cli_runner.go:164] Run: docker container inspect addons-340965 --format={{.State.Status}}
	I0311 23:34:44.238114  988495 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33902 SSHKeyPath:/home/jenkins/minikube-integration/18358-982285/.minikube/machines/addons-340965/id_rsa Username:docker}
	I0311 23:34:44.238746  988495 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0311 23:34:44.243824  988495 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0311 23:34:44.243849  988495 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0311 23:34:44.244011  988495 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-340965
	I0311 23:34:44.238752  988495 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0311 23:34:44.271797  988495 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0311 23:34:44.271823  988495 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0311 23:34:44.271889  988495 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-340965
	I0311 23:34:44.281387  988495 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0311 23:34:44.239508  988495 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.0
	I0311 23:34:44.239512  988495 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.5
	I0311 23:34:44.239516  988495 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0311 23:34:44.263614  988495 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33902 SSHKeyPath:/home/jenkins/minikube-integration/18358-982285/.minikube/machines/addons-340965/id_rsa Username:docker}
	I0311 23:34:44.239502  988495 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0311 23:34:44.284742  988495 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0311 23:34:44.287496  988495 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-340965"
	I0311 23:34:44.292005  988495 out.go:177]   - Using image docker.io/registry:2.8.3
	I0311 23:34:44.294166  988495 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0311 23:34:44.294192  988495 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0311 23:34:44.294234  988495 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0311 23:34:44.294243  988495 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0311 23:34:44.294280  988495 host.go:66] Checking if "addons-340965" exists ...
	I0311 23:34:44.296168  988495 cli_runner.go:164] Run: docker container inspect addons-340965 --format={{.State.Status}}
	I0311 23:34:44.296375  988495 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0311 23:34:44.296441  988495 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-340965
	I0311 23:34:44.307975  988495 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0311 23:34:44.308074  988495 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-340965
	I0311 23:34:44.311355  988495 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0311 23:34:44.310547  988495 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0311 23:34:44.310621  988495 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-340965
	I0311 23:34:44.379376  988495 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33902 SSHKeyPath:/home/jenkins/minikube-integration/18358-982285/.minikube/machines/addons-340965/id_rsa Username:docker}
	I0311 23:34:44.394935  988495 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.0
	I0311 23:34:44.397406  988495 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0311 23:34:44.397431  988495 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0311 23:34:44.397493  988495 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-340965
	I0311 23:34:44.394896  988495 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-340965
	I0311 23:34:44.449427  988495 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0311 23:34:44.451459  988495 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0311 23:34:44.451483  988495 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0311 23:34:44.451549  988495 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-340965
	I0311 23:34:44.464336  988495 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33902 SSHKeyPath:/home/jenkins/minikube-integration/18358-982285/.minikube/machines/addons-340965/id_rsa Username:docker}
	I0311 23:34:44.444582  988495 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0311 23:34:44.444637  988495 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0311 23:34:44.466497  988495 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33902 SSHKeyPath:/home/jenkins/minikube-integration/18358-982285/.minikube/machines/addons-340965/id_rsa Username:docker}
	I0311 23:34:44.468836  988495 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0311 23:34:44.468855  988495 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0311 23:34:44.468910  988495 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-340965
	I0311 23:34:44.472052  988495 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0311 23:34:44.476095  988495 out.go:177]   - Using image docker.io/busybox:stable
	I0311 23:34:44.478147  988495 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0311 23:34:44.478167  988495 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0311 23:34:44.478234  988495 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-340965
	I0311 23:34:44.484097  988495 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33902 SSHKeyPath:/home/jenkins/minikube-integration/18358-982285/.minikube/machines/addons-340965/id_rsa Username:docker}
	I0311 23:34:44.502307  988495 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33902 SSHKeyPath:/home/jenkins/minikube-integration/18358-982285/.minikube/machines/addons-340965/id_rsa Username:docker}
	I0311 23:34:44.530186  988495 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33902 SSHKeyPath:/home/jenkins/minikube-integration/18358-982285/.minikube/machines/addons-340965/id_rsa Username:docker}
	I0311 23:34:44.560792  988495 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33902 SSHKeyPath:/home/jenkins/minikube-integration/18358-982285/.minikube/machines/addons-340965/id_rsa Username:docker}
	I0311 23:34:44.600457  988495 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33902 SSHKeyPath:/home/jenkins/minikube-integration/18358-982285/.minikube/machines/addons-340965/id_rsa Username:docker}
	I0311 23:34:44.622132  988495 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33902 SSHKeyPath:/home/jenkins/minikube-integration/18358-982285/.minikube/machines/addons-340965/id_rsa Username:docker}
	I0311 23:34:44.627550  988495 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33902 SSHKeyPath:/home/jenkins/minikube-integration/18358-982285/.minikube/machines/addons-340965/id_rsa Username:docker}
	I0311 23:34:44.656031  988495 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33902 SSHKeyPath:/home/jenkins/minikube-integration/18358-982285/.minikube/machines/addons-340965/id_rsa Username:docker}
	I0311 23:34:44.900371  988495 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0311 23:34:44.900449  988495 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0311 23:34:44.950760  988495 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0311 23:34:44.950783  988495 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0311 23:34:44.990658  988495 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0311 23:34:45.093744  988495 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0311 23:34:45.093777  988495 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0311 23:34:45.266585  988495 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0311 23:34:45.266661  988495 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0311 23:34:45.275604  988495 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0311 23:34:45.275684  988495 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0311 23:34:45.310591  988495 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0311 23:34:45.310670  988495 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0311 23:34:45.442518  988495 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0311 23:34:45.442594  988495 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0311 23:34:45.449842  988495 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0311 23:34:45.462107  988495 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0311 23:34:45.462181  988495 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0311 23:34:45.468990  988495 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0311 23:34:45.536884  988495 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0311 23:34:45.536955  988495 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0311 23:34:45.552348  988495 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0311 23:34:45.573920  988495 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0311 23:34:45.604347  988495 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0311 23:34:45.604374  988495 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0311 23:34:45.614927  988495 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0311 23:34:45.614955  988495 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0311 23:34:45.630249  988495 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0311 23:34:45.630275  988495 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0311 23:34:45.635951  988495 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0311 23:34:45.639233  988495 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0311 23:34:45.642937  988495 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0311 23:34:45.642963  988495 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0311 23:34:45.730717  988495 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0311 23:34:45.730745  988495 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0311 23:34:45.787297  988495 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0311 23:34:45.787365  988495 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0311 23:34:45.883539  988495 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0311 23:34:45.883575  988495 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0311 23:34:45.955850  988495 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0311 23:34:45.955874  988495 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0311 23:34:45.973148  988495 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0311 23:34:45.973227  988495 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0311 23:34:45.996769  988495 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0311 23:34:45.996842  988495 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0311 23:34:46.018848  988495 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0311 23:34:46.018925  988495 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0311 23:34:46.159363  988495 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0311 23:34:46.215937  988495 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0311 23:34:46.216010  988495 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0311 23:34:46.236842  988495 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0311 23:34:46.317972  988495 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0311 23:34:46.318047  988495 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0311 23:34:46.325244  988495 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0311 23:34:46.325316  988495 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0311 23:34:46.333423  988495 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0311 23:34:46.471105  988495 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0311 23:34:46.471172  988495 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0311 23:34:46.647295  988495 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0311 23:34:46.647338  988495 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0311 23:34:46.669564  988495 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0311 23:34:46.669589  988495 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0311 23:34:46.805556  988495 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0311 23:34:46.805587  988495 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0311 23:34:46.871993  988495 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0311 23:34:46.872019  988495 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0311 23:34:46.996371  988495 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0311 23:34:47.114515  988495 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0311 23:34:47.114543  988495 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0311 23:34:47.178375  988495 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0311 23:34:47.379795  988495 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.914015815s)
	I0311 23:34:47.379840  988495 start.go:948] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
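The 2.9s completion above is the sed pipeline launched at 23:34:44.444582 finishing: it rewrites the coredns ConfigMap so host.minikube.internal resolves to the host gateway, and inserts log ahead of the errors plugin. Reconstructed from the sed arguments in the log, the injected Corefile fragment is:

	hosts {
	   192.168.49.1 host.minikube.internal
	   fallthrough
	}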
	I0311 23:34:47.380963  988495 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.914885574s)
	I0311 23:34:47.381783  988495 node_ready.go:35] waiting up to 6m0s for node "addons-340965" to be "Ready" ...
	I0311 23:34:47.386363  988495 node_ready.go:49] node "addons-340965" has status "Ready":"True"
	I0311 23:34:47.386387  988495 node_ready.go:38] duration metric: took 4.578982ms for node "addons-340965" to be "Ready" ...
	I0311 23:34:47.386398  988495 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0311 23:34:47.404902  988495 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-cqzb4" in "kube-system" namespace to be "Ready" ...
	I0311 23:34:47.541039  988495 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0311 23:34:47.541073  988495 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0311 23:34:47.786966  988495 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0311 23:34:47.786994  988495 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0311 23:34:47.883815  988495 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-340965" context rescaled to 1 replicas
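The rescale above trims coredns to a single replica; a sketch of the equivalent manual step, assuming the context name from this run:

	kubectl --context addons-340965 -n kube-system scale deployment coredns --replicas=1

This is consistent with the first replica, coredns-5dd5756b68-cqzb4, disappearing mid-wait just below, after which the wait moves on to coredns-5dd5756b68-fjwj5.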
	I0311 23:34:47.956089  988495 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0311 23:34:48.408095  988495 pod_ready.go:97] error getting pod "coredns-5dd5756b68-cqzb4" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-cqzb4" not found
	I0311 23:34:48.408132  988495 pod_ready.go:81] duration metric: took 1.003194423s for pod "coredns-5dd5756b68-cqzb4" in "kube-system" namespace to be "Ready" ...
	E0311 23:34:48.408144  988495 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-cqzb4" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-cqzb4" not found
	I0311 23:34:48.408152  988495 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-fjwj5" in "kube-system" namespace to be "Ready" ...
	I0311 23:34:48.756313  988495 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.765569955s)
	I0311 23:34:49.356477  988495 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.906553292s)
	I0311 23:34:49.517330  988495 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.048264127s)
	I0311 23:34:50.415119  988495 pod_ready.go:102] pod "coredns-5dd5756b68-fjwj5" in "kube-system" namespace has status "Ready":"False"
	I0311 23:34:50.994725  988495 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0311 23:34:50.994847  988495 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-340965
	I0311 23:34:51.030795  988495 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33902 SSHKeyPath:/home/jenkins/minikube-integration/18358-982285/.minikube/machines/addons-340965/id_rsa Username:docker}
	I0311 23:34:51.504366  988495 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0311 23:34:51.620682  988495 addons.go:234] Setting addon gcp-auth=true in "addons-340965"
	I0311 23:34:51.620741  988495 host.go:66] Checking if "addons-340965" exists ...
	I0311 23:34:51.621216  988495 cli_runner.go:164] Run: docker container inspect addons-340965 --format={{.State.Status}}
	I0311 23:34:51.649384  988495 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0311 23:34:51.649445  988495 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-340965
	I0311 23:34:51.685619  988495 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33902 SSHKeyPath:/home/jenkins/minikube-integration/18358-982285/.minikube/machines/addons-340965/id_rsa Username:docker}
	I0311 23:34:52.415875  988495 pod_ready.go:102] pod "coredns-5dd5756b68-fjwj5" in "kube-system" namespace has status "Ready":"False"
	I0311 23:34:52.897269  988495 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.344879916s)
	I0311 23:34:52.897694  988495 addons.go:470] Verifying addon ingress=true in "addons-340965"
	I0311 23:34:52.897311  988495 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.323362098s)
	I0311 23:34:52.897345  988495 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.261371719s)
	I0311 23:34:52.897390  988495 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.258137779s)
	I0311 23:34:52.897420  988495 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.737983657s)
	I0311 23:34:52.900185  988495 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-340965 service yakd-dashboard -n yakd-dashboard
	
	I0311 23:34:52.897502  988495 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.564004842s)
	I0311 23:34:52.897586  988495 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.901180199s)
	I0311 23:34:52.897639  988495 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (5.719234515s)
	I0311 23:34:52.897449  988495 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.66053406s)
	I0311 23:34:52.902628  988495 addons.go:470] Verifying addon registry=true in "addons-340965"
	I0311 23:34:52.904413  988495 out.go:177] * Verifying registry addon...
	I0311 23:34:52.903019  988495 out.go:177] * Verifying ingress addon...
	I0311 23:34:52.903035  988495 addons.go:470] Verifying addon metrics-server=true in "addons-340965"
	W0311 23:34:52.903071  988495 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0311 23:34:52.906512  988495 retry.go:31] will retry after 131.589824ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
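Both failures above are the usual CRD ordering problem: the VolumeSnapshotClass is applied in the same batch as the CRDs that define it, so no REST mapping for snapshot.storage.k8s.io/v1 exists yet, hence "ensure CRDs are installed first". minikube simply retries (adding --force, at 23:34:53.038323 below). A two-phase sketch of the same fix, assuming the manifest paths from the log:

	kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	kubectl wait --for=condition=established crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml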
	I0311 23:34:52.907468  988495 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0311 23:34:52.909506  988495 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0311 23:34:52.921302  988495 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0311 23:34:52.921330  988495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0311 23:34:52.921486  988495 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
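The warning above is an optimistic-concurrency conflict: the addon read storageclass local-path, another writer updated it first, and the write-back failed with "the object has been modified". Retrying, or patching only the annotation rather than replacing the whole object, avoids the race; a sketch, where local-path comes from the log and standard is assumed to be minikube's usual default class name:

	kubectl patch storageclass local-path -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
	kubectl patch storageclass standard -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'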
	I0311 23:34:52.935053  988495 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0311 23:34:52.935085  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 23:34:53.038323  988495 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0311 23:34:53.442991  988495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 23:34:53.451183  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 23:34:53.955155  988495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 23:34:53.956182  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 23:34:54.427258  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 23:34:54.427902  988495 pod_ready.go:102] pod "coredns-5dd5756b68-fjwj5" in "kube-system" namespace has status "Ready":"False"
	I0311 23:34:54.428142  988495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 23:34:54.438633  988495 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.482495188s)
	I0311 23:34:54.438680  988495 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-340965"
	I0311 23:34:54.440985  988495 out.go:177] * Verifying csi-hostpath-driver addon...
	I0311 23:34:54.438959  988495 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.789549036s)
	I0311 23:34:54.444475  988495 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0311 23:34:54.446284  988495 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.1
	I0311 23:34:54.443707  988495 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0311 23:34:54.448543  988495 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0311 23:34:54.448582  988495 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0311 23:34:54.453943  988495 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0311 23:34:54.454015  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 23:34:54.505234  988495 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0311 23:34:54.505301  988495 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0311 23:34:54.556424  988495 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0311 23:34:54.556450  988495 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0311 23:34:54.579143  988495 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0311 23:34:54.925177  988495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 23:34:54.925734  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 23:34:54.954393  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 23:34:55.109979  988495 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.071586391s)
	I0311 23:34:55.414050  988495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 23:34:55.417452  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 23:34:55.459461  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 23:34:55.649105  988495 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.069920985s)
	I0311 23:34:55.652417  988495 addons.go:470] Verifying addon gcp-auth=true in "addons-340965"
	I0311 23:34:55.654705  988495 out.go:177] * Verifying gcp-auth addon...
	I0311 23:34:55.657541  988495 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0311 23:34:55.668857  988495 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0311 23:34:55.668933  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 23:34:55.916105  988495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 23:34:55.916660  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 23:34:55.955531  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 23:34:56.161487  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 23:34:56.414915  988495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 23:34:56.415049  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 23:34:56.454927  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 23:34:56.661336  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 23:34:56.915367  988495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 23:34:56.916225  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 23:34:56.918637  988495 pod_ready.go:102] pod "coredns-5dd5756b68-fjwj5" in "kube-system" namespace has status "Ready":"False"
	I0311 23:34:56.955596  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 23:34:57.161346  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 23:34:57.416213  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 23:34:57.416762  988495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 23:34:57.453794  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 23:34:57.661663  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 23:34:57.920007  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 23:34:57.924399  988495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 23:34:57.954528  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 23:34:58.162067  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 23:34:58.422426  988495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 23:34:58.423488  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 23:34:58.454037  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 23:34:58.661607  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 23:34:58.915398  988495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 23:34:58.915860  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 23:34:58.919838  988495 pod_ready.go:102] pod "coredns-5dd5756b68-fjwj5" in "kube-system" namespace has status "Ready":"False"
	I0311 23:34:58.954717  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 23:34:59.161849  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 23:34:59.430773  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 23:34:59.432233  988495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 23:34:59.435807  988495 pod_ready.go:92] pod "coredns-5dd5756b68-fjwj5" in "kube-system" namespace has status "Ready":"True"
	I0311 23:34:59.435832  988495 pod_ready.go:81] duration metric: took 11.02767259s for pod "coredns-5dd5756b68-fjwj5" in "kube-system" namespace to be "Ready" ...
	I0311 23:34:59.435843  988495 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-340965" in "kube-system" namespace to be "Ready" ...
	I0311 23:34:59.445710  988495 pod_ready.go:92] pod "etcd-addons-340965" in "kube-system" namespace has status "Ready":"True"
	I0311 23:34:59.445734  988495 pod_ready.go:81] duration metric: took 9.884063ms for pod "etcd-addons-340965" in "kube-system" namespace to be "Ready" ...
	I0311 23:34:59.445749  988495 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-340965" in "kube-system" namespace to be "Ready" ...
	I0311 23:34:59.455842  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 23:34:59.459638  988495 pod_ready.go:92] pod "kube-apiserver-addons-340965" in "kube-system" namespace has status "Ready":"True"
	I0311 23:34:59.459667  988495 pod_ready.go:81] duration metric: took 13.910807ms for pod "kube-apiserver-addons-340965" in "kube-system" namespace to be "Ready" ...
	I0311 23:34:59.459679  988495 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-340965" in "kube-system" namespace to be "Ready" ...
	I0311 23:34:59.467723  988495 pod_ready.go:92] pod "kube-controller-manager-addons-340965" in "kube-system" namespace has status "Ready":"True"
	I0311 23:34:59.467750  988495 pod_ready.go:81] duration metric: took 8.063728ms for pod "kube-controller-manager-addons-340965" in "kube-system" namespace to be "Ready" ...
	I0311 23:34:59.467762  988495 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-ct2vp" in "kube-system" namespace to be "Ready" ...
	I0311 23:34:59.473574  988495 pod_ready.go:92] pod "kube-proxy-ct2vp" in "kube-system" namespace has status "Ready":"True"
	I0311 23:34:59.473601  988495 pod_ready.go:81] duration metric: took 5.830391ms for pod "kube-proxy-ct2vp" in "kube-system" namespace to be "Ready" ...
	I0311 23:34:59.473612  988495 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-340965" in "kube-system" namespace to be "Ready" ...
	I0311 23:34:59.661170  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 23:34:59.812635  988495 pod_ready.go:92] pod "kube-scheduler-addons-340965" in "kube-system" namespace has status "Ready":"True"
	I0311 23:34:59.812659  988495 pod_ready.go:81] duration metric: took 339.039711ms for pod "kube-scheduler-addons-340965" in "kube-system" namespace to be "Ready" ...
	I0311 23:34:59.812670  988495 pod_ready.go:38] duration metric: took 12.426213533s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
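The 12.4s total above covers one readiness wait per system-critical label; a rough hand-rolled equivalent for a single label, assuming the kube-dns selector from the list:

	kubectl --context addons-340965 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=360s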
	I0311 23:34:59.812685  988495 api_server.go:52] waiting for apiserver process to appear ...
	I0311 23:34:59.812764  988495 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 23:34:59.831039  988495 api_server.go:72] duration metric: took 15.982287507s to wait for apiserver process to appear ...
	I0311 23:34:59.831067  988495 api_server.go:88] waiting for apiserver healthz status ...
	I0311 23:34:59.831087  988495 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0311 23:34:59.840000  988495 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0311 23:34:59.841368  988495 api_server.go:141] control plane version: v1.28.4
	I0311 23:34:59.841394  988495 api_server.go:131] duration metric: took 10.320744ms to wait for apiserver health ...
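The healthz probe above is a plain HTTPS GET against the apiserver endpoint from the log; by hand it would look like the following (certificate verification skipped, and anonymous access to /healthz assumed, which Kubernetes grants unauthenticated users by default):

	curl -sk https://192.168.49.2:8443/healthz
	ok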
	I0311 23:34:59.841404  988495 system_pods.go:43] waiting for kube-system pods to appear ...
	I0311 23:34:59.916521  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 23:34:59.916775  988495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 23:34:59.955096  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 23:35:00.026188  988495 system_pods.go:59] 18 kube-system pods found
	I0311 23:35:00.026546  988495 system_pods.go:61] "coredns-5dd5756b68-fjwj5" [6debb495-afc6-468b-8f3c-537c319a9eaf] Running
	I0311 23:35:00.026674  988495 system_pods.go:61] "csi-hostpath-attacher-0" [c51f7fd3-decc-4d87-8007-72504b2f15b1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0311 23:35:00.026700  988495 system_pods.go:61] "csi-hostpath-resizer-0" [276d3d5b-d1dd-4c90-9e3d-7c56aa5ef7db] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0311 23:35:00.026712  988495 system_pods.go:61] "csi-hostpathplugin-72dmn" [5eb32233-5628-4883-8a03-2f92108874f4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0311 23:35:00.026737  988495 system_pods.go:61] "etcd-addons-340965" [e6f4f2b9-6f45-4620-b5c2-88f46ac09ca0] Running
	I0311 23:35:00.026742  988495 system_pods.go:61] "kindnet-9br8r" [c26f20dc-2c2c-4e07-8254-31bcd7e509c9] Running
	I0311 23:35:00.026748  988495 system_pods.go:61] "kube-apiserver-addons-340965" [49c7dc0f-62aa-4449-907f-904b990aa6f9] Running
	I0311 23:35:00.026753  988495 system_pods.go:61] "kube-controller-manager-addons-340965" [b3a5cf4a-c83f-46c8-af9e-b6ee9123406f] Running
	I0311 23:35:00.026766  988495 system_pods.go:61] "kube-ingress-dns-minikube" [ba5170f1-755e-4820-a169-023f4a889fe4] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0311 23:35:00.026778  988495 system_pods.go:61] "kube-proxy-ct2vp" [4b1ea99f-e179-4c90-b4d2-a14764c78572] Running
	I0311 23:35:00.026783  988495 system_pods.go:61] "kube-scheduler-addons-340965" [ac683b3d-caac-4361-85ad-473c224659e6] Running
	I0311 23:35:00.026790  988495 system_pods.go:61] "metrics-server-69cf46c98-nsv6v" [c6cf4532-bd58-4c32-91d0-ecc672ae77af] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0311 23:35:00.026799  988495 system_pods.go:61] "nvidia-device-plugin-daemonset-zdvjj" [e64e6a9f-0ea4-4a0a-99d1-b04f1decd16f] Running
	I0311 23:35:00.026808  988495 system_pods.go:61] "registry-proxy-2nf5b" [94842744-4fc5-4400-b1de-06c2a38939d2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0311 23:35:00.026821  988495 system_pods.go:61] "registry-vzlg2" [2b2010b4-72b4-4529-9c91-720efb092e0c] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0311 23:35:00.026829  988495 system_pods.go:61] "snapshot-controller-58dbcc7b99-k9xcd" [914adf86-d0b3-44d4-8214-5a7f90883d8c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0311 23:35:00.026866  988495 system_pods.go:61] "snapshot-controller-58dbcc7b99-ppxqd" [18667b31-1e29-4721-871a-907bc2362170] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0311 23:35:00.026970  988495 system_pods.go:61] "storage-provisioner" [29b88d75-3da6-4bac-8eab-9c7e997a1264] Running
	I0311 23:35:00.026985  988495 system_pods.go:74] duration metric: took 185.575301ms to wait for pod list to return data ...
	I0311 23:35:00.026996  988495 default_sa.go:34] waiting for default service account to be created ...
	I0311 23:35:00.168798  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 23:35:00.236972  988495 default_sa.go:45] found service account: "default"
	I0311 23:35:00.237004  988495 default_sa.go:55] duration metric: took 209.996148ms for default service account to be created ...
	I0311 23:35:00.237378  988495 system_pods.go:116] waiting for k8s-apps to be running ...
	I0311 23:35:00.422051  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 23:35:00.423257  988495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 23:35:00.430095  988495 system_pods.go:86] 18 kube-system pods found
	I0311 23:35:00.430218  988495 system_pods.go:89] "coredns-5dd5756b68-fjwj5" [6debb495-afc6-468b-8f3c-537c319a9eaf] Running
	I0311 23:35:00.430272  988495 system_pods.go:89] "csi-hostpath-attacher-0" [c51f7fd3-decc-4d87-8007-72504b2f15b1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0311 23:35:00.430310  988495 system_pods.go:89] "csi-hostpath-resizer-0" [276d3d5b-d1dd-4c90-9e3d-7c56aa5ef7db] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0311 23:35:00.430339  988495 system_pods.go:89] "csi-hostpathplugin-72dmn" [5eb32233-5628-4883-8a03-2f92108874f4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0311 23:35:00.430374  988495 system_pods.go:89] "etcd-addons-340965" [e6f4f2b9-6f45-4620-b5c2-88f46ac09ca0] Running
	I0311 23:35:00.430398  988495 system_pods.go:89] "kindnet-9br8r" [c26f20dc-2c2c-4e07-8254-31bcd7e509c9] Running
	I0311 23:35:00.430416  988495 system_pods.go:89] "kube-apiserver-addons-340965" [49c7dc0f-62aa-4449-907f-904b990aa6f9] Running
	I0311 23:35:00.430437  988495 system_pods.go:89] "kube-controller-manager-addons-340965" [b3a5cf4a-c83f-46c8-af9e-b6ee9123406f] Running
	I0311 23:35:00.430460  988495 system_pods.go:89] "kube-ingress-dns-minikube" [ba5170f1-755e-4820-a169-023f4a889fe4] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0311 23:35:00.430492  988495 system_pods.go:89] "kube-proxy-ct2vp" [4b1ea99f-e179-4c90-b4d2-a14764c78572] Running
	I0311 23:35:00.430513  988495 system_pods.go:89] "kube-scheduler-addons-340965" [ac683b3d-caac-4361-85ad-473c224659e6] Running
	I0311 23:35:00.430536  988495 system_pods.go:89] "metrics-server-69cf46c98-nsv6v" [c6cf4532-bd58-4c32-91d0-ecc672ae77af] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0311 23:35:00.430566  988495 system_pods.go:89] "nvidia-device-plugin-daemonset-zdvjj" [e64e6a9f-0ea4-4a0a-99d1-b04f1decd16f] Running
	I0311 23:35:00.430590  988495 system_pods.go:89] "registry-proxy-2nf5b" [94842744-4fc5-4400-b1de-06c2a38939d2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0311 23:35:00.430610  988495 system_pods.go:89] "registry-vzlg2" [2b2010b4-72b4-4529-9c91-720efb092e0c] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0311 23:35:00.430631  988495 system_pods.go:89] "snapshot-controller-58dbcc7b99-k9xcd" [914adf86-d0b3-44d4-8214-5a7f90883d8c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0311 23:35:00.430650  988495 system_pods.go:89] "snapshot-controller-58dbcc7b99-ppxqd" [18667b31-1e29-4721-871a-907bc2362170] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0311 23:35:00.430681  988495 system_pods.go:89] "storage-provisioner" [29b88d75-3da6-4bac-8eab-9c7e997a1264] Running
	I0311 23:35:00.430702  988495 system_pods.go:126] duration metric: took 193.302621ms to wait for k8s-apps to be running ...
	I0311 23:35:00.430723  988495 system_svc.go:44] waiting for kubelet service to be running ....
	I0311 23:35:00.430826  988495 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0311 23:35:00.455621  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 23:35:00.456629  988495 system_svc.go:56] duration metric: took 25.894157ms WaitForService to wait for kubelet
	I0311 23:35:00.456716  988495 kubeadm.go:576] duration metric: took 16.607982868s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0311 23:35:00.456750  988495 node_conditions.go:102] verifying NodePressure condition ...
	I0311 23:35:00.613604  988495 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0311 23:35:00.613694  988495 node_conditions.go:123] node cpu capacity is 2
	I0311 23:35:00.613730  988495 node_conditions.go:105] duration metric: took 156.963088ms to run NodePressure ...
	I0311 23:35:00.613771  988495 start.go:240] waiting for startup goroutines ...
	I0311 23:35:00.662883  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 23:35:00.919499  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 23:35:00.923935  988495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 23:35:00.955282  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 23:35:01.161882  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 23:35:01.415550  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 23:35:01.418183  988495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 23:35:01.454337  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 23:35:01.662850  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 23:35:01.918742  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 23:35:01.919712  988495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 23:35:01.956995  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 23:35:02.162980  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 23:35:02.416447  988495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 23:35:02.419573  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 23:35:02.456082  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 23:35:02.707568  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 23:35:02.917496  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 23:35:02.918336  988495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 23:35:02.955291  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 23:35:03.161321  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 23:35:03.415353  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 23:35:03.416961  988495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 23:35:03.454027  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 23:35:03.661847  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 23:35:03.917079  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 23:35:03.917899  988495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 23:35:03.960444  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 23:35:04.161317  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 23:35:04.419155  988495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 23:35:04.420410  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 23:35:04.459423  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 23:35:04.661013  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 23:35:04.914396  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 23:35:04.915543  988495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 23:35:04.954157  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 23:35:05.161915  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 23:35:05.415574  988495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 23:35:05.416557  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 23:35:05.456862  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 23:35:05.661965  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 23:35:05.915719  988495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 23:35:05.917413  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 23:35:05.953993  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 23:35:06.162069  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 23:35:06.413714  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 23:35:06.414898  988495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 23:35:06.457034  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 23:35:06.661654  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 23:35:06.916390  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 23:35:06.917771  988495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 23:35:06.954513  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 23:35:07.161479  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 23:35:07.415726  988495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 23:35:07.417646  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 23:35:07.454442  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 23:35:07.662261  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 23:35:07.916762  988495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 23:35:07.917710  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 23:35:07.954956  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 23:35:08.162446  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 23:35:08.416190  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 23:35:08.417039  988495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 23:35:08.454152  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 23:35:08.662524  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 23:35:08.916460  988495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 23:35:08.916962  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 23:35:08.955123  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 23:35:09.161744  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 23:35:09.415002  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 23:35:09.416178  988495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 23:35:09.458485  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 23:35:09.661414  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 23:35:09.915094  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 23:35:09.915342  988495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 23:35:09.954542  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 23:35:10.162044  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 23:35:10.414806  988495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 23:35:10.415564  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 23:35:10.454265  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 23:35:10.661563  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 23:35:10.915854  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 23:35:10.917078  988495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 23:35:10.955886  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 23:35:11.161925  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 23:35:11.418597  988495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 23:35:11.428682  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 23:35:11.456338  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 23:35:11.663047  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 23:35:11.916865  988495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 23:35:11.918962  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 23:35:11.955850  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 23:35:12.161760  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 23:35:12.417121  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 23:35:12.417906  988495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 23:35:12.454984  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 23:35:12.662762  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 23:35:12.916041  988495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 23:35:12.917695  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 23:35:12.957519  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 23:35:13.164899  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 23:35:13.420205  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 23:35:13.421395  988495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 23:35:13.456413  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 23:35:13.665066  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 23:35:13.919852  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 23:35:13.920189  988495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 23:35:13.957735  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 23:35:14.166288  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 23:35:14.418597  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 23:35:14.424142  988495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 23:35:14.454793  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 23:35:14.662196  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 23:35:14.917114  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 23:35:14.917907  988495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 23:35:14.955259  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 23:35:15.161564  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 23:35:15.414896  988495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 23:35:15.415788  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0311 23:35:15.455235  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 23:35:15.662003  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 23:35:15.916295  988495 kapi.go:107] duration metric: took 23.008854286s to wait for kubernetes.io/minikube-addons=registry ...
	I0311 23:35:15.917149  988495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 23:35:15.954580  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 23:35:16.161517  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 23:35:16.414953  988495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 23:35:16.454899  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 23:35:16.666967  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 23:35:16.914553  988495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 23:35:16.954856  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 23:35:17.161589  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 23:35:17.413726  988495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 23:35:17.454281  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 23:35:17.662023  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 23:35:17.914505  988495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 23:35:17.954093  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 23:35:18.161797  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 23:35:18.414484  988495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 23:35:18.454006  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 23:35:18.661514  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 23:35:18.914451  988495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 23:35:18.953970  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 23:35:19.177066  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 23:35:19.415711  988495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 23:35:19.456059  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 23:35:19.661847  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 23:35:19.915623  988495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 23:35:19.991063  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 23:35:20.161464  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 23:35:20.415236  988495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 23:35:20.457076  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 23:35:20.674563  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 23:35:20.915479  988495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 23:35:20.954315  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 23:35:21.161988  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 23:35:21.414648  988495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 23:35:21.454585  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 23:35:21.661381  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 23:35:21.914286  988495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 23:35:21.954789  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 23:35:22.161935  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 23:35:22.414951  988495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 23:35:22.455510  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 23:35:22.662507  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 23:35:22.914749  988495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 23:35:22.955550  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 23:35:23.161625  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 23:35:23.414009  988495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 23:35:23.454607  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 23:35:23.661375  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 23:35:23.914074  988495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 23:35:23.955473  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 23:35:24.161681  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 23:35:24.414530  988495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 23:35:24.454226  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 23:35:24.661817  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 23:35:24.914463  988495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 23:35:24.955869  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 23:35:25.162106  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 23:35:25.414487  988495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 23:35:25.453731  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 23:35:25.661846  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 23:35:25.914979  988495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 23:35:25.955760  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 23:35:26.161986  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 23:35:26.426361  988495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 23:35:26.453892  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 23:35:26.661306  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 23:35:26.914889  988495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 23:35:26.955895  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 23:35:27.161740  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 23:35:27.414657  988495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 23:35:27.454757  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 23:35:27.664493  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 23:35:27.914732  988495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 23:35:27.957047  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 23:35:28.162216  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 23:35:28.414839  988495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 23:35:28.454728  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 23:35:28.661962  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 23:35:28.915059  988495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 23:35:28.955044  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 23:35:29.161755  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 23:35:29.414399  988495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 23:35:29.454089  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 23:35:29.662066  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 23:35:29.914616  988495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 23:35:29.955002  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 23:35:30.164624  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 23:35:30.414217  988495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 23:35:30.454103  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 23:35:30.662010  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 23:35:30.914939  988495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 23:35:30.958422  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 23:35:31.161390  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 23:35:31.413851  988495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 23:35:31.454418  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 23:35:31.661558  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 23:35:31.914398  988495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 23:35:31.955291  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 23:35:32.163344  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 23:35:32.415801  988495 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0311 23:35:32.455527  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 23:35:32.661586  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 23:35:32.914735  988495 kapi.go:107] duration metric: took 40.005213894s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0311 23:35:32.955797  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 23:35:33.161559  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 23:35:33.454489  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 23:35:33.662230  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 23:35:33.954742  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 23:35:34.161751  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 23:35:34.455004  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 23:35:34.661404  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 23:35:34.955394  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 23:35:35.162414  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 23:35:35.453814  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 23:35:35.662158  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0311 23:35:35.954963  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 23:35:36.162045  988495 kapi.go:107] duration metric: took 40.504495584s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0311 23:35:36.164535  988495 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-340965 cluster.
	I0311 23:35:36.166439  988495 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0311 23:35:36.168339  988495 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
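As an aside on that last hint: a minimal sketch of a pod that opts out of the credential mount via the `gcp-auth-skip-secret` label. Only the label key comes from the message above; the pod name, image, and label value here are illustrative assumptions:

	apiVersion: v1
	kind: Pod
	metadata:
	  name: no-creds-demo              # hypothetical name
	  labels:
	    gcp-auth-skip-secret: "true"   # assumed value; the message only specifies the key
	spec:
	  containers:
	  - name: web                      # hypothetical container
	    image: nginx                   # any image; nginx is what this suite deploys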
	I0311 23:35:36.454491  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 23:35:36.954839  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 23:35:37.465107  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 23:35:37.964961  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 23:35:38.454401  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 23:35:38.954647  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 23:35:39.454333  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 23:35:39.961646  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 23:35:40.456372  988495 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0311 23:35:40.954593  988495 kapi.go:107] duration metric: took 46.5108839s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0311 23:35:40.956657  988495 out.go:177] * Enabled addons: cloud-spanner, ingress-dns, storage-provisioner, nvidia-device-plugin, yakd, inspektor-gadget, metrics-server, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0311 23:35:40.958391  988495 addons.go:505] duration metric: took 57.109379757s for enable addons: enabled=[cloud-spanner ingress-dns storage-provisioner nvidia-device-plugin yakd inspektor-gadget metrics-server storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0311 23:35:40.958439  988495 start.go:245] waiting for cluster config update ...
	I0311 23:35:40.958463  988495 start.go:254] writing updated cluster config ...
	I0311 23:35:40.958769  988495 ssh_runner.go:195] Run: rm -f paused
	I0311 23:35:41.289993  988495 start.go:600] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0311 23:35:41.291944  988495 out.go:177] * Done! kubectl is now configured to use "addons-340965" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD
	0947a44f71c20       760b7cbba31e1       4 seconds ago        Running             task-pv-container                        0                   2411023dcab6b       task-pv-pod-restore
	959c7990d40ec       dd1b12fcb6097       8 seconds ago        Exited              hello-world-app                          2                   fa97288cff73c       hello-world-app-5d77478584-8h4dv
	d42f08871e3eb       be5e6f23a9904       33 seconds ago       Running             nginx                                    0                   f9624da74cb31       nginx
	7684759b36db0       ee6d597e62dc8       About a minute ago   Running             csi-snapshotter                          0                   978b8abcd1862       csi-hostpathplugin-72dmn
	9cbd936aa05ec       642ded511e141       About a minute ago   Running             csi-provisioner                          0                   978b8abcd1862       csi-hostpathplugin-72dmn
	89f511328c791       922312104da8a       About a minute ago   Running             liveness-probe                           0                   978b8abcd1862       csi-hostpathplugin-72dmn
	d580251812510       08f6b2990811a       About a minute ago   Running             hostpath                                 0                   978b8abcd1862       csi-hostpathplugin-72dmn
	10889dbfe2105       bafe72500920c       About a minute ago   Running             gcp-auth                                 0                   493f65ab53493       gcp-auth-5f6b4f85fd-bdlj2
	87836fb3e830e       0107d56dbc0be       About a minute ago   Running             node-driver-registrar                    0                   978b8abcd1862       csi-hostpathplugin-72dmn
	4d98fb5ef2188       487fa743e1e22       About a minute ago   Running             csi-resizer                              0                   f3cabcbe33a52       csi-hostpath-resizer-0
	e5934252dde3b       1461903ec4fe9       About a minute ago   Running             csi-external-health-monitor-controller   0                   978b8abcd1862       csi-hostpathplugin-72dmn
	ef2e16bff98ed       9a80d518f102c       About a minute ago   Running             csi-attacher                             0                   902cfe4e58072       csi-hostpath-attacher-0
	846ed977cb855       1a024e390dd05       About a minute ago   Exited              patch                                    0                   61d52eaeed1f4       ingress-nginx-admission-patch-5jfmh
	1b87f9bdd9102       1a024e390dd05       About a minute ago   Exited              create                                   0                   c066302228988       ingress-nginx-admission-create-mdrx7
	4ca9729f4acab       4d1e5c3e97420       About a minute ago   Running             volume-snapshot-controller               0                   9677905292eaa       snapshot-controller-58dbcc7b99-ppxqd
	f04aab95ac613       4d1e5c3e97420       About a minute ago   Running             volume-snapshot-controller               0                   1b921ef5ae2e5       snapshot-controller-58dbcc7b99-k9xcd
	b5cd026458040       20e3f2db01e81       About a minute ago   Running             yakd                                     0                   9c42cfe6d3848       yakd-dashboard-9947fc6bf-8nc7j
	2468ac52ba304       7ce2150c8929b       About a minute ago   Running             local-path-provisioner                   0                   ac3bffd5eab94       local-path-provisioner-78b46b4d5c-9x65w
	70b8fd310c942       41340d5d57adb       About a minute ago   Running             cloud-spanner-emulator                   0                   782f8f5eac24b       cloud-spanner-emulator-6548d5df46-k5g4w
	bd7829712d461       97e04611ad434       About a minute ago   Running             coredns                                  0                   8fecea55f03e2       coredns-5dd5756b68-fjwj5
	fcb909a497232       c0cfb4ce73bda       About a minute ago   Running             nvidia-device-plugin-ctr                 0                   22ebb370ca131       nvidia-device-plugin-daemonset-zdvjj
	172c176bf4ddf       ba04bb24b9575       2 minutes ago        Running             storage-provisioner                      0                   7b85cf6382e7d       storage-provisioner
	f5f9451458c2c       4740c1948d3fc       2 minutes ago        Running             kindnet-cni                              0                   ede4c58fbeba2       kindnet-9br8r
	135491fc814fa       3ca3ca488cf13       2 minutes ago        Running             kube-proxy                               0                   80d470b9383e1       kube-proxy-ct2vp
	18a2cee373fb5       9961cbceaf234       2 minutes ago        Running             kube-controller-manager                  0                   5dc87664faf84       kube-controller-manager-addons-340965
	0ff9c252f0f96       05c284c929889       2 minutes ago        Running             kube-scheduler                           0                   c00aed3c926b7       kube-scheduler-addons-340965
	4616019ce72e8       04b4c447bb9d4       2 minutes ago        Running             kube-apiserver                           0                   71a2c1fb776bd       kube-apiserver-addons-340965
	5c3ee643caba0       9cdd6470f48c8       2 minutes ago        Running             etcd                                     0                   fd6e235d77cec       etcd-addons-340965
	
	
	==> containerd <==
	Mar 11 23:36:48 addons-340965 containerd[761]: time="2024-03-11T23:36:48.025987383Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 11 23:36:48 addons-340965 containerd[761]: time="2024-03-11T23:36:48.026009635Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 11 23:36:48 addons-340965 containerd[761]: time="2024-03-11T23:36:48.026294812Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2411023dcab6bb1f33a79ad6c79540cf3d59aea4e1298a66ac98b1cfc83df7ab pid=8306 runtime=io.containerd.runc.v2
	Mar 11 23:36:48 addons-340965 containerd[761]: time="2024-03-11T23:36:48.030029282Z" level=info msg="shim disconnected" id=39b9aaa6f64c502c8f96a01ec625a4c0239d55cb451501c7dd5e4017c75321cd
	Mar 11 23:36:48 addons-340965 containerd[761]: time="2024-03-11T23:36:48.030108279Z" level=warning msg="cleaning up after shim disconnected" id=39b9aaa6f64c502c8f96a01ec625a4c0239d55cb451501c7dd5e4017c75321cd namespace=k8s.io
	Mar 11 23:36:48 addons-340965 containerd[761]: time="2024-03-11T23:36:48.030119306Z" level=info msg="cleaning up dead shim"
	Mar 11 23:36:48 addons-340965 containerd[761]: time="2024-03-11T23:36:48.045149863Z" level=warning msg="cleanup warnings time=\"2024-03-11T23:36:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=8317 runtime=io.containerd.runc.v2\n"
	Mar 11 23:36:48 addons-340965 containerd[761]: time="2024-03-11T23:36:48.088728108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:task-pv-pod-restore,Uid:bed5cedb-78c3-4b91-bbad-9ed4c6b2c9e7,Namespace:default,Attempt:0,} returns sandbox id \"2411023dcab6bb1f33a79ad6c79540cf3d59aea4e1298a66ac98b1cfc83df7ab\""
	Mar 11 23:36:48 addons-340965 containerd[761]: time="2024-03-11T23:36:48.093855505Z" level=info msg="PullImage \"docker.io/nginx:latest\""
	Mar 11 23:36:48 addons-340965 containerd[761]: time="2024-03-11T23:36:48.096397765Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Mar 11 23:36:48 addons-340965 containerd[761]: time="2024-03-11T23:36:48.124842736Z" level=info msg="TearDown network for sandbox \"39b9aaa6f64c502c8f96a01ec625a4c0239d55cb451501c7dd5e4017c75321cd\" successfully"
	Mar 11 23:36:48 addons-340965 containerd[761]: time="2024-03-11T23:36:48.124893918Z" level=info msg="StopPodSandbox for \"39b9aaa6f64c502c8f96a01ec625a4c0239d55cb451501c7dd5e4017c75321cd\" returns successfully"
	Mar 11 23:36:48 addons-340965 containerd[761]: time="2024-03-11T23:36:48.131891792Z" level=info msg="RemoveContainer for \"68a08d05b21edb4290ba6d0def37c0a3fe029f0a8f24f6a71c924489efbe9974\""
	Mar 11 23:36:48 addons-340965 containerd[761]: time="2024-03-11T23:36:48.139429285Z" level=info msg="RemoveContainer for \"68a08d05b21edb4290ba6d0def37c0a3fe029f0a8f24f6a71c924489efbe9974\" returns successfully"
	Mar 11 23:36:48 addons-340965 containerd[761]: time="2024-03-11T23:36:48.140024500Z" level=error msg="ContainerStatus for \"68a08d05b21edb4290ba6d0def37c0a3fe029f0a8f24f6a71c924489efbe9974\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"68a08d05b21edb4290ba6d0def37c0a3fe029f0a8f24f6a71c924489efbe9974\": not found"
	Mar 11 23:36:48 addons-340965 containerd[761]: time="2024-03-11T23:36:48.258262856Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Mar 11 23:36:48 addons-340965 containerd[761]: time="2024-03-11T23:36:48.277212969Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/library/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
	Mar 11 23:36:48 addons-340965 containerd[761]: time="2024-03-11T23:36:48.280546229Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:760b7cbba31e196288effd2af6924c42637ac5e0d67db4de6309f24518844676,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
	Mar 11 23:36:48 addons-340965 containerd[761]: time="2024-03-11T23:36:48.284787884Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/library/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
	Mar 11 23:36:48 addons-340965 containerd[761]: time="2024-03-11T23:36:48.289317072Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/library/nginx@sha256:c26ae7472d624ba1fafd296e73cecc4f93f853088e6a9c13c0d52f6ca5865107,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
	Mar 11 23:36:48 addons-340965 containerd[761]: time="2024-03-11T23:36:48.291527197Z" level=info msg="PullImage \"docker.io/nginx:latest\" returns image reference \"sha256:760b7cbba31e196288effd2af6924c42637ac5e0d67db4de6309f24518844676\""
	Mar 11 23:36:48 addons-340965 containerd[761]: time="2024-03-11T23:36:48.296923174Z" level=info msg="CreateContainer within sandbox \"2411023dcab6bb1f33a79ad6c79540cf3d59aea4e1298a66ac98b1cfc83df7ab\" for container &ContainerMetadata{Name:task-pv-container,Attempt:0,}"
	Mar 11 23:36:48 addons-340965 containerd[761]: time="2024-03-11T23:36:48.320719429Z" level=info msg="CreateContainer within sandbox \"2411023dcab6bb1f33a79ad6c79540cf3d59aea4e1298a66ac98b1cfc83df7ab\" for &ContainerMetadata{Name:task-pv-container,Attempt:0,} returns container id \"0947a44f71c20e470f177abc7806b14cd9cdb823c0dc15b43867e89737e8435e\""
	Mar 11 23:36:48 addons-340965 containerd[761]: time="2024-03-11T23:36:48.323380684Z" level=info msg="StartContainer for \"0947a44f71c20e470f177abc7806b14cd9cdb823c0dc15b43867e89737e8435e\""
	Mar 11 23:36:48 addons-340965 containerd[761]: time="2024-03-11T23:36:48.375749882Z" level=info msg="StartContainer for \"0947a44f71c20e470f177abc7806b14cd9cdb823c0dc15b43867e89737e8435e\" returns successfully"
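The two "failed to decode hosts.toml ... invalid `host` tree" lines above come from containerd's per-registry host configuration (files named hosts.toml under /etc/containerd/certs.d/). The error indicates containerd found a file it could not parse; why minikube's file is malformed is not shown in this log. For reference, a minimal well-formed hosts.toml looks roughly like the sketch below, with the registry address as a placeholder assumption:

	# /etc/containerd/certs.d/docker.io/hosts.toml  (path per containerd convention; directory name assumed)
	server = "https://registry-1.docker.io"

	[host."https://registry-1.docker.io"]
	  capabilities = ["pull", "resolve"]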
	
	
	==> coredns [bd7829712d461c7f56a7a33cd3f62ba8314c25cd15e823763090f96ae67903fd] <==
	[INFO] 10.244.0.19:56149 - 17270 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000068264s
	[INFO] 10.244.0.19:56149 - 16343 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000041287s
	[INFO] 10.244.0.19:46421 - 60976 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001838428s
	[INFO] 10.244.0.19:46421 - 8524 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000138712s
	[INFO] 10.244.0.19:56149 - 33928 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.00130287s
	[INFO] 10.244.0.19:56149 - 56912 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.00106469s
	[INFO] 10.244.0.19:56149 - 19695 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000075132s
	[INFO] 10.244.0.19:60404 - 29564 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000098508s
	[INFO] 10.244.0.19:43394 - 16149 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000100256s
	[INFO] 10.244.0.19:60404 - 58239 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000076535s
	[INFO] 10.244.0.19:60404 - 15241 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000056113s
	[INFO] 10.244.0.19:43394 - 53798 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000069946s
	[INFO] 10.244.0.19:60404 - 53241 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000074936s
	[INFO] 10.244.0.19:43394 - 3360 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000105687s
	[INFO] 10.244.0.19:43394 - 59279 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000054596s
	[INFO] 10.244.0.19:60404 - 43800 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000044068s
	[INFO] 10.244.0.19:43394 - 54318 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000048458s
	[INFO] 10.244.0.19:43394 - 57670 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000071817s
	[INFO] 10.244.0.19:60404 - 40916 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000965814s
	[INFO] 10.244.0.19:60404 - 28502 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001331358s
	[INFO] 10.244.0.19:43394 - 57478 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.005422659s
	[INFO] 10.244.0.19:60404 - 8191 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002214096s
	[INFO] 10.244.0.19:60404 - 10237 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00020623s
	[INFO] 10.244.0.19:43394 - 31986 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001236058s
	[INFO] 10.244.0.19:43394 - 38579 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000073729s
	
	
	==> describe nodes <==
	Name:               addons-340965
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-340965
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e0a9a803bb8418ee87aee3b4880090eb65379520
	                    minikube.k8s.io/name=addons-340965
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_11T23_34_33_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-340965
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-340965"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 11 Mar 2024 23:34:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-340965
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 11 Mar 2024 23:36:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 11 Mar 2024 23:36:34 +0000   Mon, 11 Mar 2024 23:34:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 11 Mar 2024 23:36:34 +0000   Mon, 11 Mar 2024 23:34:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 11 Mar 2024 23:36:34 +0000   Mon, 11 Mar 2024 23:34:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 11 Mar 2024 23:36:34 +0000   Mon, 11 Mar 2024 23:34:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-340965
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	System Info:
	  Machine ID:                 b9c5424408ba4572afa8b24d92245076
	  System UUID:                9ad75438-84ee-4fc0-879e-5459e095aff2
	  Boot ID:                    8c314cab-fe64-4f72-b005-d9231ff3e4e9
	  Kernel Version:             5.15.0-1055-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.6.28
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (21 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-6548d5df46-k5g4w    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m5s
	  default                     hello-world-app-5d77478584-8h4dv           0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         37s
	  default                     task-pv-pod-restore                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6s
	  gcp-auth                    gcp-auth-5f6b4f85fd-bdlj2                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 coredns-5dd5756b68-fjwj5                   100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m9s
	  kube-system                 csi-hostpath-attacher-0                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 csi-hostpath-resizer-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 csi-hostpathplugin-72dmn                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 etcd-addons-340965                         100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m21s
	  kube-system                 kindnet-9br8r                              100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m9s
	  kube-system                 kube-apiserver-addons-340965               250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m21s
	  kube-system                 kube-controller-manager-addons-340965      200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m21s
	  kube-system                 kube-proxy-ct2vp                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m9s
	  kube-system                 kube-scheduler-addons-340965               100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m21s
	  kube-system                 nvidia-device-plugin-daemonset-zdvjj       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m5s
	  kube-system                 snapshot-controller-58dbcc7b99-k9xcd       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 snapshot-controller-58dbcc7b99-ppxqd       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m4s
	  local-path-storage          local-path-provisioner-78b46b4d5c-9x65w    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m4s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-8nc7j             0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     2m3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             348Mi (4%)  476Mi (6%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m7s                   kube-proxy       
	  Normal  Starting                 2m29s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m29s (x8 over 2m29s)  kubelet          Node addons-340965 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m29s (x8 over 2m29s)  kubelet          Node addons-340965 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m29s (x7 over 2m29s)  kubelet          Node addons-340965 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m29s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 2m22s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m21s                  kubelet          Node addons-340965 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m21s                  kubelet          Node addons-340965 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m21s                  kubelet          Node addons-340965 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             2m21s                  kubelet          Node addons-340965 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  2m21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                2m11s                  kubelet          Node addons-340965 status is now: NodeReady
	  Normal  RegisteredNode           2m10s                  node-controller  Node addons-340965 event: Registered Node addons-340965 in Controller
	
	
	==> dmesg <==
	[  +0.001054] FS-Cache: O-key=[8] '0ad4c90000000000'
	[  +0.000807] FS-Cache: N-cookie c=00000030 [p=00000027 fl=2 nc=0 na=1]
	[  +0.000931] FS-Cache: N-cookie d=000000002b48fe46{9p.inode} n=00000000aa1e6556
	[  +0.001026] FS-Cache: N-key=[8] '0ad4c90000000000'
	[  +0.002667] FS-Cache: Duplicate cookie detected
	[  +0.000812] FS-Cache: O-cookie c=0000002a [p=00000027 fl=226 nc=0 na=1]
	[  +0.001009] FS-Cache: O-cookie d=000000002b48fe46{9p.inode} n=00000000896a7b32
	[  +0.001019] FS-Cache: O-key=[8] '0ad4c90000000000'
	[  +0.000802] FS-Cache: N-cookie c=00000031 [p=00000027 fl=2 nc=0 na=1]
	[  +0.000939] FS-Cache: N-cookie d=000000002b48fe46{9p.inode} n=0000000001af3578
	[  +0.001060] FS-Cache: N-key=[8] '0ad4c90000000000'
	[  +3.038727] FS-Cache: Duplicate cookie detected
	[  +0.000700] FS-Cache: O-cookie c=00000028 [p=00000027 fl=226 nc=0 na=1]
	[  +0.000942] FS-Cache: O-cookie d=000000002b48fe46{9p.inode} n=0000000010edb9e3
	[  +0.001144] FS-Cache: O-key=[8] '09d4c90000000000'
	[  +0.000703] FS-Cache: N-cookie c=00000033 [p=00000027 fl=2 nc=0 na=1]
	[  +0.000908] FS-Cache: N-cookie d=000000002b48fe46{9p.inode} n=000000008bfe977e
	[  +0.001090] FS-Cache: N-key=[8] '09d4c90000000000'
	[  +0.385783] FS-Cache: Duplicate cookie detected
	[  +0.000761] FS-Cache: O-cookie c=0000002d [p=00000027 fl=226 nc=0 na=1]
	[  +0.000972] FS-Cache: O-cookie d=000000002b48fe46{9p.inode} n=00000000d8dbd577
	[  +0.001078] FS-Cache: O-key=[8] '0fd4c90000000000'
	[  +0.000789] FS-Cache: N-cookie c=00000034 [p=00000027 fl=2 nc=0 na=1]
	[  +0.001041] FS-Cache: N-cookie d=000000002b48fe46{9p.inode} n=00000000aa1e6556
	[  +0.001147] FS-Cache: N-key=[8] '0fd4c90000000000'
	
	
	==> etcd [5c3ee643caba0811a90949108de1a5899aea82a43c9e0a2b1fe2544f1160fd08] <==
	{"level":"info","ts":"2024-03-11T23:34:25.565309Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-11T23:34:25.565334Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-11T23:34:25.565341Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-11T23:34:25.565762Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-03-11T23:34:25.565778Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-03-11T23:34:25.566131Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2024-03-11T23:34:25.566201Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2024-03-11T23:34:25.653326Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-03-11T23:34:25.653369Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-03-11T23:34:25.653384Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-03-11T23:34:25.653406Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-03-11T23:34:25.653413Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-03-11T23:34:25.653423Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-03-11T23:34:25.65343Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-03-11T23:34:25.660426Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-11T23:34:25.66647Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-340965 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-11T23:34:25.666576Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-11T23:34:25.667611Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-03-11T23:34:25.667655Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-11T23:34:25.668441Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-11T23:34:25.670355Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-11T23:34:25.670698Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-11T23:34:25.692806Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-11T23:34:25.693196Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-11T23:34:25.69383Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> gcp-auth [10889dbfe2105cb8b51014e7089415eaec63e7a299486e85cd702b0a606c9ede] <==
	2024/03/11 23:35:35 GCP Auth Webhook started!
	2024/03/11 23:35:53 Ready to marshal response ...
	2024/03/11 23:35:53 Ready to write response ...
	2024/03/11 23:36:13 Ready to marshal response ...
	2024/03/11 23:36:13 Ready to write response ...
	2024/03/11 23:36:16 Ready to marshal response ...
	2024/03/11 23:36:16 Ready to write response ...
	2024/03/11 23:36:26 Ready to marshal response ...
	2024/03/11 23:36:26 Ready to write response ...
	2024/03/11 23:36:47 Ready to marshal response ...
	2024/03/11 23:36:47 Ready to write response ...
	
	
	==> kernel <==
	 23:36:53 up  4:19,  0 users,  load average: 1.77, 2.69, 2.73
	Linux addons-340965 5.15.0-1055-aws #60~20.04.1-Ubuntu SMP Thu Feb 22 15:54:21 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kindnet [f5f9451458c2c76e40a6f711c25904ebaa66c64fcac00b12f1fcc2944719c035] <==
	I0311 23:34:47.294020       1 main.go:227] handling current node
	I0311 23:34:57.312046       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0311 23:34:57.312073       1 main.go:227] handling current node
	I0311 23:35:07.323814       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0311 23:35:07.323844       1 main.go:227] handling current node
	I0311 23:35:17.328355       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0311 23:35:17.328382       1 main.go:227] handling current node
	I0311 23:35:27.341064       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0311 23:35:27.341093       1 main.go:227] handling current node
	I0311 23:35:37.353103       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0311 23:35:37.353131       1 main.go:227] handling current node
	I0311 23:35:47.358560       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0311 23:35:47.358596       1 main.go:227] handling current node
	I0311 23:35:57.371177       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0311 23:35:57.371204       1 main.go:227] handling current node
	I0311 23:36:07.383522       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0311 23:36:07.383548       1 main.go:227] handling current node
	I0311 23:36:17.395904       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0311 23:36:17.395930       1 main.go:227] handling current node
	I0311 23:36:27.408569       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0311 23:36:27.408599       1 main.go:227] handling current node
	I0311 23:36:37.412515       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0311 23:36:37.412543       1 main.go:227] handling current node
	I0311 23:36:47.416214       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0311 23:36:47.416244       1 main.go:227] handling current node
	
	
	==> kube-apiserver [4616019ce72e8db504d87322b70c37430c2155aeb40d5459a2de1dbbfab6ac5e] <==
	I0311 23:34:52.718498       1 controller.go:624] quota admission added evaluator for: jobs.batch
	W0311 23:34:53.503696       1 aggregator.go:166] failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0311 23:34:54.148188       1 alloc.go:330] "allocated clusterIPs" service="kube-system/csi-hostpath-attacher" clusterIPs={"IPv4":"10.110.103.14"}
	I0311 23:34:54.171686       1 controller.go:624] quota admission added evaluator for: statefulsets.apps
	I0311 23:34:54.291006       1 alloc.go:330] "allocated clusterIPs" service="kube-system/csi-hostpath-resizer" clusterIPs={"IPv4":"10.104.103.188"}
	W0311 23:34:54.785911       1 aggregator.go:166] failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0311 23:34:55.479225       1 alloc.go:330] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.101.218.60"}
	W0311 23:35:21.237549       1 handler_proxy.go:93] no RequestInfo found in the context
	E0311 23:35:21.237623       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0311 23:35:21.239893       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0311 23:35:21.240038       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.55.32:443/apis/metrics.k8s.io/v1beta1: Get "https://10.102.55.32:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.102.55.32:443: connect: connection refused
	E0311 23:35:21.240709       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.55.32:443/apis/metrics.k8s.io/v1beta1: Get "https://10.102.55.32:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.102.55.32:443: connect: connection refused
	I0311 23:35:21.321304       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0311 23:35:28.533635       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0311 23:36:10.397596       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0311 23:36:10.411152       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0311 23:36:11.428267       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0311 23:36:16.201739       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I0311 23:36:16.628290       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.104.176.192"}
	I0311 23:36:22.245167       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0311 23:36:25.958246       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0311 23:36:26.475831       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.98.0.107"}
	
	
	==> kube-controller-manager [18a2cee373fb57cafc463fa1cc1663be3c999ccc921c948963b739fd040feb3d] <==
	I0311 23:36:20.688322       1 namespace_controller.go:182] "Namespace has been deleted" namespace="gadget"
	I0311 23:36:26.204654       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I0311 23:36:26.221332       1 event.go:307] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-8h4dv"
	I0311 23:36:26.246816       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="41.990023ms"
	I0311 23:36:26.280369       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="33.496218ms"
	I0311 23:36:26.281178       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="47.244µs"
	I0311 23:36:26.281383       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="149.198µs"
	W0311 23:36:27.253125       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0311 23:36:27.253156       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0311 23:36:28.423666       1 event.go:307] "Event occurred" object="default/hpvc-restore" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0311 23:36:28.725951       1 event.go:307] "Event occurred" object="default/hpvc-restore" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0311 23:36:29.022335       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="58.607µs"
	I0311 23:36:30.041437       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="99.394µs"
	I0311 23:36:31.043131       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="61.766µs"
	I0311 23:36:43.727040       1 event.go:307] "Event occurred" object="default/hpvc-restore" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0311 23:36:44.842864       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I0311 23:36:44.848129       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-76dc478dd8" duration="37.144µs"
	I0311 23:36:44.857968       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I0311 23:36:45.167479       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="15.931214ms"
	I0311 23:36:45.167606       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="92.461µs"
	I0311 23:36:46.146647       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="9.806406ms"
	I0311 23:36:46.147010       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="65.302µs"
	I0311 23:36:46.888187       1 event.go:307] "Event occurred" object="default/hpvc-restore" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	W0311 23:36:51.536455       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0311 23:36:51.536487       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	
	==> kube-proxy [135491fc814fa4741ce2bc3222f64e9eb2f4396f923a168577a8b74c28cc2936] <==
	I0311 23:34:45.470260       1 server_others.go:69] "Using iptables proxy"
	I0311 23:34:45.485550       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I0311 23:34:45.550442       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0311 23:34:45.553123       1 server_others.go:152] "Using iptables Proxier"
	I0311 23:34:45.553169       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0311 23:34:45.553178       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0311 23:34:45.553203       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0311 23:34:45.553424       1 server.go:846] "Version info" version="v1.28.4"
	I0311 23:34:45.553438       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0311 23:34:45.554131       1 config.go:188] "Starting service config controller"
	I0311 23:34:45.554166       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0311 23:34:45.554186       1 config.go:97] "Starting endpoint slice config controller"
	I0311 23:34:45.554199       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0311 23:34:45.554869       1 config.go:315] "Starting node config controller"
	I0311 23:34:45.554887       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0311 23:34:45.654241       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0311 23:34:45.654246       1 shared_informer.go:318] Caches are synced for service config
	I0311 23:34:45.655572       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [0ff9c252f0f964ecdece7676107c8107c3155a243f382f51400f05a0d430a1ba] <==
	W0311 23:34:29.237774       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0311 23:34:29.237801       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0311 23:34:29.237849       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0311 23:34:29.237878       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0311 23:34:29.237924       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0311 23:34:29.237939       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0311 23:34:29.237988       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0311 23:34:29.238001       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0311 23:34:29.238058       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0311 23:34:29.238075       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0311 23:34:29.238135       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0311 23:34:29.238150       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0311 23:34:29.238251       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0311 23:34:29.238290       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0311 23:34:29.238332       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0311 23:34:29.238362       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0311 23:34:29.238402       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0311 23:34:29.238416       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0311 23:34:29.238462       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0311 23:34:29.238477       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0311 23:34:29.238561       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0311 23:34:29.238589       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0311 23:34:29.238662       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0311 23:34:29.238678       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0311 23:34:30.624672       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 11 23:36:47 addons-340965 kubelet[1481]: E0311 23:36:47.661569    1481 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="92695f47-f16a-41cd-8611-fc76ebefad86" containerName="task-pv-container"
	Mar 11 23:36:47 addons-340965 kubelet[1481]: E0311 23:36:47.661577    1481 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ba5170f1-755e-4820-a169-023f4a889fe4" containerName="minikube-ingress-dns"
	Mar 11 23:36:47 addons-340965 kubelet[1481]: E0311 23:36:47.661586    1481 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ba5170f1-755e-4820-a169-023f4a889fe4" containerName="minikube-ingress-dns"
	Mar 11 23:36:47 addons-340965 kubelet[1481]: I0311 23:36:47.661640    1481 memory_manager.go:346] "RemoveStaleState removing state" podUID="ba5170f1-755e-4820-a169-023f4a889fe4" containerName="minikube-ingress-dns"
	Mar 11 23:36:47 addons-340965 kubelet[1481]: I0311 23:36:47.661650    1481 memory_manager.go:346] "RemoveStaleState removing state" podUID="ba5170f1-755e-4820-a169-023f4a889fe4" containerName="minikube-ingress-dns"
	Mar 11 23:36:47 addons-340965 kubelet[1481]: I0311 23:36:47.661657    1481 memory_manager.go:346] "RemoveStaleState removing state" podUID="ba5170f1-755e-4820-a169-023f4a889fe4" containerName="minikube-ingress-dns"
	Mar 11 23:36:47 addons-340965 kubelet[1481]: I0311 23:36:47.661665    1481 memory_manager.go:346] "RemoveStaleState removing state" podUID="ba5170f1-755e-4820-a169-023f4a889fe4" containerName="minikube-ingress-dns"
	Mar 11 23:36:47 addons-340965 kubelet[1481]: I0311 23:36:47.661674    1481 memory_manager.go:346] "RemoveStaleState removing state" podUID="c09cc488-bafd-4956-84e5-8eb5f1e0653b" containerName="controller"
	Mar 11 23:36:47 addons-340965 kubelet[1481]: I0311 23:36:47.661693    1481 memory_manager.go:346] "RemoveStaleState removing state" podUID="92695f47-f16a-41cd-8611-fc76ebefad86" containerName="task-pv-container"
	Mar 11 23:36:47 addons-340965 kubelet[1481]: I0311 23:36:47.810772    1481 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-00cceddd-e805-428b-8232-0cc11d13a91b\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^39c6b5ce-e000-11ee-9354-beb3c896eee5\") pod \"task-pv-pod-restore\" (UID: \"bed5cedb-78c3-4b91-bbad-9ed4c6b2c9e7\") " pod="default/task-pv-pod-restore"
	Mar 11 23:36:47 addons-340965 kubelet[1481]: I0311 23:36:47.811007    1481 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lslks\" (UniqueName: \"kubernetes.io/projected/bed5cedb-78c3-4b91-bbad-9ed4c6b2c9e7-kube-api-access-lslks\") pod \"task-pv-pod-restore\" (UID: \"bed5cedb-78c3-4b91-bbad-9ed4c6b2c9e7\") " pod="default/task-pv-pod-restore"
	Mar 11 23:36:47 addons-340965 kubelet[1481]: I0311 23:36:47.811115    1481 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/bed5cedb-78c3-4b91-bbad-9ed4c6b2c9e7-gcp-creds\") pod \"task-pv-pod-restore\" (UID: \"bed5cedb-78c3-4b91-bbad-9ed4c6b2c9e7\") " pod="default/task-pv-pod-restore"
	Mar 11 23:36:47 addons-340965 kubelet[1481]: I0311 23:36:47.924747    1481 operation_generator.go:665] "MountVolume.MountDevice succeeded for volume \"pvc-00cceddd-e805-428b-8232-0cc11d13a91b\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^39c6b5ce-e000-11ee-9354-beb3c896eee5\") pod \"task-pv-pod-restore\" (UID: \"bed5cedb-78c3-4b91-bbad-9ed4c6b2c9e7\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/hostpath.csi.k8s.io/3d54120f9eca94ee8d926d18886fe0b020b5e225aeeaa7806ae4999410e7f99b/globalmount\"" pod="default/task-pv-pod-restore"
	Mar 11 23:36:48 addons-340965 kubelet[1481]: I0311 23:36:48.130521    1481 scope.go:117] "RemoveContainer" containerID="68a08d05b21edb4290ba6d0def37c0a3fe029f0a8f24f6a71c924489efbe9974"
	Mar 11 23:36:48 addons-340965 kubelet[1481]: I0311 23:36:48.139704    1481 scope.go:117] "RemoveContainer" containerID="68a08d05b21edb4290ba6d0def37c0a3fe029f0a8f24f6a71c924489efbe9974"
	Mar 11 23:36:48 addons-340965 kubelet[1481]: E0311 23:36:48.140239    1481 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"68a08d05b21edb4290ba6d0def37c0a3fe029f0a8f24f6a71c924489efbe9974\": not found" containerID="68a08d05b21edb4290ba6d0def37c0a3fe029f0a8f24f6a71c924489efbe9974"
	Mar 11 23:36:48 addons-340965 kubelet[1481]: I0311 23:36:48.140295    1481 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"68a08d05b21edb4290ba6d0def37c0a3fe029f0a8f24f6a71c924489efbe9974"} err="failed to get container status \"68a08d05b21edb4290ba6d0def37c0a3fe029f0a8f24f6a71c924489efbe9974\": rpc error: code = NotFound desc = an error occurred when try to find container \"68a08d05b21edb4290ba6d0def37c0a3fe029f0a8f24f6a71c924489efbe9974\": not found"
	Mar 11 23:36:48 addons-340965 kubelet[1481]: I0311 23:36:48.314450    1481 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c09cc488-bafd-4956-84e5-8eb5f1e0653b-webhook-cert\") pod \"c09cc488-bafd-4956-84e5-8eb5f1e0653b\" (UID: \"c09cc488-bafd-4956-84e5-8eb5f1e0653b\") "
	Mar 11 23:36:48 addons-340965 kubelet[1481]: I0311 23:36:48.314529    1481 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qfg46\" (UniqueName: \"kubernetes.io/projected/c09cc488-bafd-4956-84e5-8eb5f1e0653b-kube-api-access-qfg46\") pod \"c09cc488-bafd-4956-84e5-8eb5f1e0653b\" (UID: \"c09cc488-bafd-4956-84e5-8eb5f1e0653b\") "
	Mar 11 23:36:48 addons-340965 kubelet[1481]: I0311 23:36:48.316526    1481 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c09cc488-bafd-4956-84e5-8eb5f1e0653b-kube-api-access-qfg46" (OuterVolumeSpecName: "kube-api-access-qfg46") pod "c09cc488-bafd-4956-84e5-8eb5f1e0653b" (UID: "c09cc488-bafd-4956-84e5-8eb5f1e0653b"). InnerVolumeSpecName "kube-api-access-qfg46". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Mar 11 23:36:48 addons-340965 kubelet[1481]: I0311 23:36:48.316978    1481 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c09cc488-bafd-4956-84e5-8eb5f1e0653b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "c09cc488-bafd-4956-84e5-8eb5f1e0653b" (UID: "c09cc488-bafd-4956-84e5-8eb5f1e0653b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Mar 11 23:36:48 addons-340965 kubelet[1481]: I0311 23:36:48.415103    1481 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-qfg46\" (UniqueName: \"kubernetes.io/projected/c09cc488-bafd-4956-84e5-8eb5f1e0653b-kube-api-access-qfg46\") on node \"addons-340965\" DevicePath \"\""
	Mar 11 23:36:48 addons-340965 kubelet[1481]: I0311 23:36:48.415143    1481 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c09cc488-bafd-4956-84e5-8eb5f1e0653b-webhook-cert\") on node \"addons-340965\" DevicePath \"\""
	Mar 11 23:36:49 addons-340965 kubelet[1481]: I0311 23:36:49.157263    1481 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/task-pv-pod-restore" podStartSLOduration=1.955964262 podCreationTimestamp="2024-03-11 23:36:47 +0000 UTC" firstStartedPulling="2024-03-11 23:36:48.090622312 +0000 UTC m=+136.271075129" lastFinishedPulling="2024-03-11 23:36:48.291877677 +0000 UTC m=+136.472330502" observedRunningTime="2024-03-11 23:36:49.155480043 +0000 UTC m=+137.335932868" watchObservedRunningTime="2024-03-11 23:36:49.157219635 +0000 UTC m=+137.337672460"
	Mar 11 23:36:49 addons-340965 kubelet[1481]: I0311 23:36:49.987287    1481 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="c09cc488-bafd-4956-84e5-8eb5f1e0653b" path="/var/lib/kubelet/pods/c09cc488-bafd-4956-84e5-8eb5f1e0653b/volumes"
	
	
	==> storage-provisioner [172c176bf4ddfd4e17972157a6f5c098ba98161290a257f3067698d484fbc7c8] <==
	I0311 23:34:50.217424       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0311 23:34:50.266044       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0311 23:34:50.266127       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0311 23:34:50.309780       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0311 23:34:50.309938       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-340965_a2cc863a-2dc3-4f98-80ad-bec63b1df81a!
	I0311 23:34:50.310794       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"29191ba4-54a2-4673-9ede-0ba870e39b89", APIVersion:"v1", ResourceVersion:"568", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-340965_a2cc863a-2dc3-4f98-80ad-bec63b1df81a became leader
	I0311 23:34:50.411638       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-340965_a2cc863a-2dc3-4f98-80ad-bec63b1df81a!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-340965 -n addons-340965
helpers_test.go:261: (dbg) Run:  kubectl --context addons-340965 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (38.55s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-arm64 -p functional-270400 image load --daemon gcr.io/google-containers/addon-resizer:functional-270400 --alsologtostderr
2024/03/11 23:42:55 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:354: (dbg) Done: out/minikube-linux-arm64 -p functional-270400 image load --daemon gcr.io/google-containers/addon-resizer:functional-270400 --alsologtostderr: (4.286292179s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-270400 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-270400" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.54s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.8s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-arm64 -p functional-270400 image load --daemon gcr.io/google-containers/addon-resizer:functional-270400 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-arm64 -p functional-270400 image load --daemon gcr.io/google-containers/addon-resizer:functional-270400 --alsologtostderr: (3.549768622s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-270400 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-270400" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.80s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (4.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.568299241s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-270400
functional_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p functional-270400 image load --daemon gcr.io/google-containers/addon-resizer:functional-270400 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-arm64 -p functional-270400 image load --daemon gcr.io/google-containers/addon-resizer:functional-270400 --alsologtostderr: (3.100823041s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-270400 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-270400" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (4.96s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-arm64 -p functional-270400 image save gcr.io/google-containers/addon-resizer:functional-270400 /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:385: expected "/home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.60s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-arm64 -p functional-270400 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:410: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I0311 23:43:09.280146 1021053 out.go:291] Setting OutFile to fd 1 ...
	I0311 23:43:09.281506 1021053 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 23:43:09.281524 1021053 out.go:304] Setting ErrFile to fd 2...
	I0311 23:43:09.281530 1021053 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 23:43:09.281830 1021053 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18358-982285/.minikube/bin
	I0311 23:43:09.282645 1021053 config.go:182] Loaded profile config "functional-270400": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0311 23:43:09.282818 1021053 config.go:182] Loaded profile config "functional-270400": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0311 23:43:09.283374 1021053 cli_runner.go:164] Run: docker container inspect functional-270400 --format={{.State.Status}}
	I0311 23:43:09.299781 1021053 ssh_runner.go:195] Run: systemctl --version
	I0311 23:43:09.299879 1021053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-270400
	I0311 23:43:09.316210 1021053 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33917 SSHKeyPath:/home/jenkins/minikube-integration/18358-982285/.minikube/machines/functional-270400/id_rsa Username:docker}
	I0311 23:43:09.408129 1021053 cache_images.go:286] Loading image from: /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar
	W0311 23:43:09.408186 1021053 cache_images.go:254] Failed to load cached images for profile functional-270400. make sure the profile is running. loading images: stat /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar: no such file or directory
	I0311 23:43:09.408208 1021053 cache_images.go:262] succeeded pushing to: 
	I0311 23:43:09.408214 1021053 cache_images.go:263] failed pushing to: functional-270400

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.19s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (372.92s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-571339 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
E0312 00:20:14.658052  987686 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/functional-270400/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p old-k8s-version-571339 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: exit status 80 (6m9.238080962s)

                                                
                                                
-- stdout --
	* [old-k8s-version-571339] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18358
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18358-982285/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18358-982285/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the docker driver based on existing profile
	* Starting "old-k8s-version-571339" primary control-plane node in "old-k8s-version-571339" cluster
	* Pulling base image v0.0.42-1708944392-18244 ...
	* Restarting existing docker container for "old-k8s-version-571339" ...
	* Preparing Kubernetes v1.20.0 on containerd 1.6.28 ...
	* Verifying Kubernetes components...
	  - Using image registry.k8s.io/echoserver:1.4
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-571339 addons enable metrics-server
	
	* Enabled addons: metrics-server, dashboard, storage-provisioner, default-storageclass
	
	

-- /stdout --
** stderr ** 
	I0312 00:19:27.924280 1183642 out.go:291] Setting OutFile to fd 1 ...
	I0312 00:19:27.924506 1183642 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0312 00:19:27.924518 1183642 out.go:304] Setting ErrFile to fd 2...
	I0312 00:19:27.924524 1183642 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0312 00:19:27.924768 1183642 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18358-982285/.minikube/bin
	I0312 00:19:27.925169 1183642 out.go:298] Setting JSON to false
	I0312 00:19:27.926137 1183642 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":18116,"bootTime":1710184652,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0312 00:19:27.926215 1183642 start.go:139] virtualization:  
	I0312 00:19:27.930803 1183642 out.go:177] * [old-k8s-version-571339] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0312 00:19:27.933757 1183642 out.go:177]   - MINIKUBE_LOCATION=18358
	I0312 00:19:27.936016 1183642 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0312 00:19:27.933825 1183642 notify.go:220] Checking for updates...
	I0312 00:19:27.939834 1183642 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18358-982285/kubeconfig
	I0312 00:19:27.942103 1183642 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18358-982285/.minikube
	I0312 00:19:27.943914 1183642 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0312 00:19:27.946058 1183642 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0312 00:19:27.949050 1183642 config.go:182] Loaded profile config "old-k8s-version-571339": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0312 00:19:27.951626 1183642 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0312 00:19:27.953547 1183642 driver.go:392] Setting default libvirt URI to qemu:///system
	I0312 00:19:27.977831 1183642 docker.go:122] docker version: linux-25.0.4:Docker Engine - Community
	I0312 00:19:27.977955 1183642 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0312 00:19:28.064538 1183642 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:56 OomKillDisable:true NGoroutines:71 SystemTime:2024-03-12 00:19:28.053909653 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0312 00:19:28.064659 1183642 docker.go:295] overlay module found
	I0312 00:19:28.067182 1183642 out.go:177] * Using the docker driver based on existing profile
	I0312 00:19:28.072192 1183642 start.go:297] selected driver: docker
	I0312 00:19:28.072216 1183642 start.go:901] validating driver "docker" against &{Name:old-k8s-version-571339 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-571339 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0312 00:19:28.072343 1183642 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0312 00:19:28.072961 1183642 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0312 00:19:28.164679 1183642 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:56 OomKillDisable:true NGoroutines:71 SystemTime:2024-03-12 00:19:28.152268647 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0312 00:19:28.165153 1183642 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0312 00:19:28.165212 1183642 cni.go:84] Creating CNI manager for ""
	I0312 00:19:28.165228 1183642 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0312 00:19:28.165296 1183642 start.go:340] cluster config:
	{Name:old-k8s-version-571339 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-571339 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0312 00:19:28.167817 1183642 out.go:177] * Starting "old-k8s-version-571339" primary control-plane node in "old-k8s-version-571339" cluster
	I0312 00:19:28.170083 1183642 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0312 00:19:28.172190 1183642 out.go:177] * Pulling base image v0.0.42-1708944392-18244 ...
	I0312 00:19:28.174376 1183642 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0312 00:19:28.174453 1183642 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18358-982285/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0312 00:19:28.174467 1183642 cache.go:56] Caching tarball of preloaded images
	I0312 00:19:28.174572 1183642 preload.go:173] Found /home/jenkins/minikube-integration/18358-982285/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0312 00:19:28.174587 1183642 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
	I0312 00:19:28.174725 1183642 profile.go:142] Saving config to /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/old-k8s-version-571339/config.json ...
	I0312 00:19:28.174975 1183642 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local docker daemon
	I0312 00:19:28.206957 1183642 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local docker daemon, skipping pull
	I0312 00:19:28.206990 1183642 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 exists in daemon, skipping load
	I0312 00:19:28.207012 1183642 cache.go:194] Successfully downloaded all kic artifacts
	I0312 00:19:28.207053 1183642 start.go:360] acquireMachinesLock for old-k8s-version-571339: {Name:mkef158f105a604472ef9e86e18cdf6a70ef4ce3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0312 00:19:28.207146 1183642 start.go:364] duration metric: took 55.613µs to acquireMachinesLock for "old-k8s-version-571339"
	I0312 00:19:28.207178 1183642 start.go:96] Skipping create...Using existing machine configuration
	I0312 00:19:28.207196 1183642 fix.go:54] fixHost starting: 
	I0312 00:19:28.207598 1183642 cli_runner.go:164] Run: docker container inspect old-k8s-version-571339 --format={{.State.Status}}
	I0312 00:19:28.229618 1183642 fix.go:112] recreateIfNeeded on old-k8s-version-571339: state=Stopped err=<nil>
	W0312 00:19:28.229652 1183642 fix.go:138] unexpected machine state, will restart: <nil>
	I0312 00:19:28.231988 1183642 out.go:177] * Restarting existing docker container for "old-k8s-version-571339" ...
	I0312 00:19:28.233805 1183642 cli_runner.go:164] Run: docker start old-k8s-version-571339
	I0312 00:19:28.627032 1183642 cli_runner.go:164] Run: docker container inspect old-k8s-version-571339 --format={{.State.Status}}
	I0312 00:19:28.651517 1183642 kic.go:430] container "old-k8s-version-571339" state is running.
	I0312 00:19:28.653571 1183642 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-571339
	I0312 00:19:28.677176 1183642 profile.go:142] Saving config to /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/old-k8s-version-571339/config.json ...
	I0312 00:19:28.677431 1183642 machine.go:94] provisionDockerMachine start ...
	I0312 00:19:28.677519 1183642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-571339
	I0312 00:19:28.702685 1183642 main.go:141] libmachine: Using SSH client type: native
	I0312 00:19:28.703494 1183642 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1ca0] 0x3e4500 <nil>  [] 0s} 127.0.0.1 34197 <nil> <nil>}
	I0312 00:19:28.703518 1183642 main.go:141] libmachine: About to run SSH command:
	hostname
	I0312 00:19:28.704764 1183642 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0312 00:19:31.836283 1183642 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-571339
	
	I0312 00:19:31.836310 1183642 ubuntu.go:169] provisioning hostname "old-k8s-version-571339"
	I0312 00:19:31.836386 1183642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-571339
	I0312 00:19:31.857338 1183642 main.go:141] libmachine: Using SSH client type: native
	I0312 00:19:31.857587 1183642 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1ca0] 0x3e4500 <nil>  [] 0s} 127.0.0.1 34197 <nil> <nil>}
	I0312 00:19:31.857598 1183642 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-571339 && echo "old-k8s-version-571339" | sudo tee /etc/hostname
	I0312 00:19:32.010688 1183642 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-571339
	
	I0312 00:19:32.010787 1183642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-571339
	I0312 00:19:32.028650 1183642 main.go:141] libmachine: Using SSH client type: native
	I0312 00:19:32.028903 1183642 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1ca0] 0x3e4500 <nil>  [] 0s} 127.0.0.1 34197 <nil> <nil>}
	I0312 00:19:32.028926 1183642 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-571339' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-571339/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-571339' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0312 00:19:32.177325 1183642 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0312 00:19:32.177354 1183642 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18358-982285/.minikube CaCertPath:/home/jenkins/minikube-integration/18358-982285/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18358-982285/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18358-982285/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18358-982285/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18358-982285/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18358-982285/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18358-982285/.minikube}
	I0312 00:19:32.177384 1183642 ubuntu.go:177] setting up certificates
	I0312 00:19:32.177396 1183642 provision.go:84] configureAuth start
	I0312 00:19:32.177468 1183642 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-571339
	I0312 00:19:32.192670 1183642 provision.go:143] copyHostCerts
	I0312 00:19:32.192746 1183642 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-982285/.minikube/ca.pem, removing ...
	I0312 00:19:32.192765 1183642 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-982285/.minikube/ca.pem
	I0312 00:19:32.192842 1183642 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-982285/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18358-982285/.minikube/ca.pem (1082 bytes)
	I0312 00:19:32.192952 1183642 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-982285/.minikube/cert.pem, removing ...
	I0312 00:19:32.192962 1183642 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-982285/.minikube/cert.pem
	I0312 00:19:32.192995 1183642 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-982285/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18358-982285/.minikube/cert.pem (1123 bytes)
	I0312 00:19:32.193059 1183642 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-982285/.minikube/key.pem, removing ...
	I0312 00:19:32.193067 1183642 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-982285/.minikube/key.pem
	I0312 00:19:32.193092 1183642 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-982285/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18358-982285/.minikube/key.pem (1679 bytes)
	I0312 00:19:32.193152 1183642 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18358-982285/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18358-982285/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18358-982285/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-571339 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-571339]
	I0312 00:19:32.748994 1183642 provision.go:177] copyRemoteCerts
	I0312 00:19:32.749071 1183642 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0312 00:19:32.749116 1183642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-571339
	I0312 00:19:32.766733 1183642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34197 SSHKeyPath:/home/jenkins/minikube-integration/18358-982285/.minikube/machines/old-k8s-version-571339/id_rsa Username:docker}
	I0312 00:19:32.870089 1183642 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-982285/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0312 00:19:32.897561 1183642 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-982285/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0312 00:19:32.931157 1183642 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-982285/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0312 00:19:32.960805 1183642 provision.go:87] duration metric: took 783.395542ms to configureAuth
	I0312 00:19:32.960835 1183642 ubuntu.go:193] setting minikube options for container-runtime
	I0312 00:19:32.961029 1183642 config.go:182] Loaded profile config "old-k8s-version-571339": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0312 00:19:32.961042 1183642 machine.go:97] duration metric: took 4.283592657s to provisionDockerMachine
	I0312 00:19:32.961049 1183642 start.go:293] postStartSetup for "old-k8s-version-571339" (driver="docker")
	I0312 00:19:32.961060 1183642 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0312 00:19:32.961114 1183642 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0312 00:19:32.961157 1183642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-571339
	I0312 00:19:32.982707 1183642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34197 SSHKeyPath:/home/jenkins/minikube-integration/18358-982285/.minikube/machines/old-k8s-version-571339/id_rsa Username:docker}
	I0312 00:19:33.082502 1183642 ssh_runner.go:195] Run: cat /etc/os-release
	I0312 00:19:33.086901 1183642 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0312 00:19:33.086941 1183642 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0312 00:19:33.086952 1183642 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0312 00:19:33.086959 1183642 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0312 00:19:33.086969 1183642 filesync.go:126] Scanning /home/jenkins/minikube-integration/18358-982285/.minikube/addons for local assets ...
	I0312 00:19:33.087028 1183642 filesync.go:126] Scanning /home/jenkins/minikube-integration/18358-982285/.minikube/files for local assets ...
	I0312 00:19:33.087120 1183642 filesync.go:149] local asset: /home/jenkins/minikube-integration/18358-982285/.minikube/files/etc/ssl/certs/9876862.pem -> 9876862.pem in /etc/ssl/certs
	I0312 00:19:33.087233 1183642 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0312 00:19:33.098729 1183642 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-982285/.minikube/files/etc/ssl/certs/9876862.pem --> /etc/ssl/certs/9876862.pem (1708 bytes)
	I0312 00:19:33.141404 1183642 start.go:296] duration metric: took 180.339073ms for postStartSetup
	I0312 00:19:33.141497 1183642 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0312 00:19:33.141542 1183642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-571339
	I0312 00:19:33.187766 1183642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34197 SSHKeyPath:/home/jenkins/minikube-integration/18358-982285/.minikube/machines/old-k8s-version-571339/id_rsa Username:docker}
	I0312 00:19:33.311261 1183642 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0312 00:19:33.317997 1183642 fix.go:56] duration metric: took 5.110802277s for fixHost
	I0312 00:19:33.318022 1183642 start.go:83] releasing machines lock for "old-k8s-version-571339", held for 5.110863658s
	I0312 00:19:33.318100 1183642 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-571339
	I0312 00:19:33.351266 1183642 ssh_runner.go:195] Run: cat /version.json
	I0312 00:19:33.351341 1183642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-571339
	I0312 00:19:33.353924 1183642 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0312 00:19:33.353977 1183642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-571339
	I0312 00:19:33.427590 1183642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34197 SSHKeyPath:/home/jenkins/minikube-integration/18358-982285/.minikube/machines/old-k8s-version-571339/id_rsa Username:docker}
	I0312 00:19:33.433781 1183642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34197 SSHKeyPath:/home/jenkins/minikube-integration/18358-982285/.minikube/machines/old-k8s-version-571339/id_rsa Username:docker}
	I0312 00:19:33.677828 1183642 ssh_runner.go:195] Run: systemctl --version
	I0312 00:19:33.682693 1183642 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0312 00:19:33.688098 1183642 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0312 00:19:33.710359 1183642 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
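	The find/sed pipeline above patches any loopback CNI config in place: it inserts a "name" field when the file lacks one and pins "cniVersion" to 1.0.0. On a hypothetical stock file the effect would look roughly like this (illustrative contents, not captured from this run):
	
	# /etc/cni/net.d/200-loopback.conf
	{ "cniVersion": "0.3.1", "type": "loopback" }                       # before
	{ "cniVersion": "1.0.0", "name": "loopback", "type": "loopback" }   # after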
	I0312 00:19:33.710437 1183642 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0312 00:19:33.724954 1183642 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0312 00:19:33.724984 1183642 start.go:494] detecting cgroup driver to use...
	I0312 00:19:33.725017 1183642 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0312 00:19:33.725070 1183642 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0312 00:19:33.750631 1183642 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0312 00:19:33.766262 1183642 docker.go:217] disabling cri-docker service (if available) ...
	I0312 00:19:33.766351 1183642 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0312 00:19:33.782626 1183642 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0312 00:19:33.797267 1183642 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0312 00:19:33.985612 1183642 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0312 00:19:34.101386 1183642 docker.go:233] disabling docker service ...
	I0312 00:19:34.101456 1183642 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0312 00:19:34.114888 1183642 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0312 00:19:34.128343 1183642 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0312 00:19:34.232573 1183642 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0312 00:19:34.346098 1183642 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0312 00:19:34.358365 1183642 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0312 00:19:34.377113 1183642 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0312 00:19:34.388697 1183642 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0312 00:19:34.399165 1183642 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0312 00:19:34.399229 1183642 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0312 00:19:34.409946 1183642 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0312 00:19:34.420764 1183642 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0312 00:19:34.431392 1183642 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0312 00:19:34.441865 1183642 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0312 00:19:34.451784 1183642 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0312 00:19:34.463271 1183642 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0312 00:19:34.472980 1183642 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0312 00:19:34.482556 1183642 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0312 00:19:34.595530 1183642 ssh_runner.go:195] Run: sudo systemctl restart containerd
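	Taken together, the sed edits above pin the sandbox image, switch any v1 runc shims to io.containerd.runc.v2, disable systemd cgroup integration to match the "cgroupfs" driver detected on the host, and point the CNI conf_dir at /etc/cni/net.d. The touched keys in /etc/containerd/config.toml end up roughly as follows (a sketch with section headers omitted, not the full file):
	
	sandbox_image = "registry.k8s.io/pause:3.2"
	restrict_oom_score_adj = false
	SystemdCgroup = false
	conf_dir = "/etc/cni/net.d"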
	I0312 00:19:34.837578 1183642 start.go:541] Will wait 60s for socket path /run/containerd/containerd.sock
	I0312 00:19:34.837648 1183642 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0312 00:19:34.844030 1183642 start.go:562] Will wait 60s for crictl version
	I0312 00:19:34.844092 1183642 ssh_runner.go:195] Run: which crictl
	I0312 00:19:34.848343 1183642 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0312 00:19:34.935920 1183642 start.go:578] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.28
	RuntimeApiVersion:  v1
	I0312 00:19:34.936059 1183642 ssh_runner.go:195] Run: containerd --version
	I0312 00:19:34.964366 1183642 ssh_runner.go:195] Run: containerd --version
	I0312 00:19:34.999773 1183642 out.go:177] * Preparing Kubernetes v1.20.0 on containerd 1.6.28 ...
	I0312 00:19:35.002361 1183642 cli_runner.go:164] Run: docker network inspect old-k8s-version-571339 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0312 00:19:35.020656 1183642 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0312 00:19:35.025659 1183642 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
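	The bash one-liner above rewrites /etc/hosts atomically: grep -v filters out any stale host.minikube.internal line, echo appends the current gateway address, and the temp file is copied back over /etc/hosts with sudo. The net effect is a single entry such as:
	
	192.168.76.1	host.minikube.internal
	
	The same pattern is used again below to map control-plane.minikube.internal to the node IP.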
	I0312 00:19:35.042393 1183642 kubeadm.go:877] updating cluster {Name:old-k8s-version-571339 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-571339 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0312 00:19:35.042518 1183642 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0312 00:19:35.042628 1183642 ssh_runner.go:195] Run: sudo crictl images --output json
	I0312 00:19:35.094948 1183642 containerd.go:612] all images are preloaded for containerd runtime.
	I0312 00:19:35.094971 1183642 containerd.go:519] Images already preloaded, skipping extraction
	I0312 00:19:35.095041 1183642 ssh_runner.go:195] Run: sudo crictl images --output json
	I0312 00:19:35.146471 1183642 containerd.go:612] all images are preloaded for containerd runtime.
	I0312 00:19:35.146493 1183642 cache_images.go:84] Images are preloaded, skipping loading
	I0312 00:19:35.146501 1183642 kubeadm.go:928] updating node { 192.168.76.2 8443 v1.20.0 containerd true true} ...
	I0312 00:19:35.146660 1183642 kubeadm.go:940] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=old-k8s-version-571339 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-571339 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0312 00:19:35.146730 1183642 ssh_runner.go:195] Run: sudo crictl info
	I0312 00:19:35.193322 1183642 cni.go:84] Creating CNI manager for ""
	I0312 00:19:35.193394 1183642 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0312 00:19:35.193416 1183642 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0312 00:19:35.193451 1183642 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-571339 NodeName:old-k8s-version-571339 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0312 00:19:35.193759 1183642 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "old-k8s-version-571339"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
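	The rendered config is written to /var/tmp/minikube/kubeadm.yaml.new and later diffed against the previous copy to decide whether the control plane needs reconfiguring (see the diff run at 00:19:36 below). If one wanted to validate such a manifest by hand, kubeadm's dry-run mode is one option (an illustrative invocation, not something this run executes):
	
	sudo /var/lib/minikube/binaries/v1.20.0/kubeadm init \
	    --config /var/tmp/minikube/kubeadm.yaml.new --dry-run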
	I0312 00:19:35.193874 1183642 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0312 00:19:35.204222 1183642 binaries.go:44] Found k8s binaries, skipping transfer
	I0312 00:19:35.204293 1183642 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0312 00:19:35.213835 1183642 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (442 bytes)
	I0312 00:19:35.234130 1183642 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0312 00:19:35.254832 1183642 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2125 bytes)
	I0312 00:19:35.273678 1183642 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0312 00:19:35.277716 1183642 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0312 00:19:35.288999 1183642 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0312 00:19:35.399931 1183642 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0312 00:19:35.415821 1183642 certs.go:68] Setting up /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/old-k8s-version-571339 for IP: 192.168.76.2
	I0312 00:19:35.415900 1183642 certs.go:194] generating shared ca certs ...
	I0312 00:19:35.415931 1183642 certs.go:226] acquiring lock for ca certs: {Name:mk0a8924146da92e76e9ff4162540f84539e9725 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0312 00:19:35.416113 1183642 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18358-982285/.minikube/ca.key
	I0312 00:19:35.416207 1183642 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18358-982285/.minikube/proxy-client-ca.key
	I0312 00:19:35.416234 1183642 certs.go:256] generating profile certs ...
	I0312 00:19:35.416368 1183642 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/old-k8s-version-571339/client.key
	I0312 00:19:35.416474 1183642 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/old-k8s-version-571339/apiserver.key.01e96a04
	I0312 00:19:35.416554 1183642 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/old-k8s-version-571339/proxy-client.key
	I0312 00:19:35.416715 1183642 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-982285/.minikube/certs/987686.pem (1338 bytes)
	W0312 00:19:35.416776 1183642 certs.go:480] ignoring /home/jenkins/minikube-integration/18358-982285/.minikube/certs/987686_empty.pem, impossibly tiny 0 bytes
	I0312 00:19:35.416799 1183642 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-982285/.minikube/certs/ca-key.pem (1675 bytes)
	I0312 00:19:35.416855 1183642 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-982285/.minikube/certs/ca.pem (1082 bytes)
	I0312 00:19:35.416915 1183642 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-982285/.minikube/certs/cert.pem (1123 bytes)
	I0312 00:19:35.416961 1183642 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-982285/.minikube/certs/key.pem (1679 bytes)
	I0312 00:19:35.417039 1183642 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-982285/.minikube/files/etc/ssl/certs/9876862.pem (1708 bytes)
	I0312 00:19:35.417964 1183642 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-982285/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0312 00:19:35.463161 1183642 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-982285/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0312 00:19:35.513407 1183642 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-982285/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0312 00:19:35.562220 1183642 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-982285/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0312 00:19:35.610311 1183642 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/old-k8s-version-571339/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0312 00:19:35.649554 1183642 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/old-k8s-version-571339/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0312 00:19:35.706934 1183642 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/old-k8s-version-571339/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0312 00:19:35.734164 1183642 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/old-k8s-version-571339/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0312 00:19:35.764765 1183642 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-982285/.minikube/certs/987686.pem --> /usr/share/ca-certificates/987686.pem (1338 bytes)
	I0312 00:19:35.794696 1183642 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-982285/.minikube/files/etc/ssl/certs/9876862.pem --> /usr/share/ca-certificates/9876862.pem (1708 bytes)
	I0312 00:19:35.825221 1183642 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-982285/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0312 00:19:35.854490 1183642 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0312 00:19:35.874924 1183642 ssh_runner.go:195] Run: openssl version
	I0312 00:19:35.880869 1183642 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/987686.pem && ln -fs /usr/share/ca-certificates/987686.pem /etc/ssl/certs/987686.pem"
	I0312 00:19:35.891073 1183642 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/987686.pem
	I0312 00:19:35.895051 1183642 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 11 23:40 /usr/share/ca-certificates/987686.pem
	I0312 00:19:35.895124 1183642 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/987686.pem
	I0312 00:19:35.902374 1183642 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/987686.pem /etc/ssl/certs/51391683.0"
	I0312 00:19:35.912271 1183642 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9876862.pem && ln -fs /usr/share/ca-certificates/9876862.pem /etc/ssl/certs/9876862.pem"
	I0312 00:19:35.922962 1183642 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9876862.pem
	I0312 00:19:35.927834 1183642 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 11 23:40 /usr/share/ca-certificates/9876862.pem
	I0312 00:19:35.927902 1183642 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9876862.pem
	I0312 00:19:35.935387 1183642 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9876862.pem /etc/ssl/certs/3ec20f2e.0"
	I0312 00:19:35.946258 1183642 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0312 00:19:35.957604 1183642 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0312 00:19:35.961834 1183642 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 11 23:34 /usr/share/ca-certificates/minikubeCA.pem
	I0312 00:19:35.961904 1183642 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0312 00:19:35.970507 1183642 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
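	The <hash>.0 link names above follow OpenSSL's subject-hash lookup convention: at verification time OpenSSL hashes a certificate's subject and looks for /etc/ssl/certs/<hash>.0, so each `openssl x509 -hash` call produces the name for the symlink created just after it. For the minikube CA that looks like (output inferred from the b5213941.0 link name):
	
	$ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	b5213941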
	I0312 00:19:35.981249 1183642 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0312 00:19:35.985360 1183642 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0312 00:19:35.993799 1183642 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0312 00:19:36.002332 1183642 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0312 00:19:36.010909 1183642 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0312 00:19:36.020176 1183642 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0312 00:19:36.028628 1183642 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0312 00:19:36.036345 1183642 kubeadm.go:391] StartCluster: {Name:old-k8s-version-571339 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-571339 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0312 00:19:36.036454 1183642 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0312 00:19:36.036516 1183642 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0312 00:19:36.089711 1183642 cri.go:89] found id: "91ba0f1087505b74193c749143407b360ca52adeb6c8e6fed4c64111ff7ac963"
	I0312 00:19:36.089749 1183642 cri.go:89] found id: "346ceec7d05caebb387531f35c381d5bed2ff6773b755619cd586b41e6efadd8"
	I0312 00:19:36.089754 1183642 cri.go:89] found id: "1a5414516a6e0b5fd6faf9a04f4428d130258262f60692b0dffc8b7ffc8541a6"
	I0312 00:19:36.089758 1183642 cri.go:89] found id: "8abc9a2fec8f5340e92089f46c5ff2bf798571fbcb6c7ce9545d0e353715bed4"
	I0312 00:19:36.089761 1183642 cri.go:89] found id: "00fea42a626bc543839c933d2b36dd4155e2329531f2c8a74fa65079753377a9"
	I0312 00:19:36.089765 1183642 cri.go:89] found id: "c3f64500c09efd7fdf78260f3bef5ed1adaefa3e3a847a7540726cbee6bd042f"
	I0312 00:19:36.089768 1183642 cri.go:89] found id: "ac83611721f7a3d26415ed5ae3625edece62f8bd00bc9a63ce61ffa2ad2c9fbc"
	I0312 00:19:36.089771 1183642 cri.go:89] found id: "022154a50546e744b25648ac078a4535c3a97e91f97547e8008e89235fd126f5"
	I0312 00:19:36.089775 1183642 cri.go:89] found id: ""
	I0312 00:19:36.089832 1183642 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0312 00:19:36.110477 1183642 cri.go:116] JSON = null
	W0312 00:19:36.110530 1183642 kubeadm.go:398] unpause failed: list paused: list returned 0 containers, but ps returned 8
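	The warning is a mismatch between two views of the same containers: `crictl ps -a` lists everything containerd knows about, including containers that exited before the docker restart (the 8 IDs above), while `runc list` only reports live runc state, which is empty this early in the boot, so there is nothing to unpause. The two probes can be replayed verbatim:
	
	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	sudo runc --root /run/containerd/runc/k8s.io list -f json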
	I0312 00:19:36.110615 1183642 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0312 00:19:36.120924 1183642 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0312 00:19:36.120950 1183642 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0312 00:19:36.120956 1183642 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0312 00:19:36.121012 1183642 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0312 00:19:36.130926 1183642 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0312 00:19:36.131453 1183642 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-571339" does not appear in /home/jenkins/minikube-integration/18358-982285/kubeconfig
	I0312 00:19:36.131570 1183642 kubeconfig.go:62] /home/jenkins/minikube-integration/18358-982285/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-571339" cluster setting kubeconfig missing "old-k8s-version-571339" context setting]
	I0312 00:19:36.131865 1183642 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18358-982285/kubeconfig: {Name:mk502765d2bd81c45b0b0cd22382df706d40c442 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
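	
	[editor's note] The kubeconfig.go:47/62 lines above show the repair path: the profile name is looked up as both a cluster and a context in the Jenkins kubeconfig, and a missing entry triggers a rewrite under the WriteFile lock. A minimal sketch of the existence check only, assuming client-go's clientcmd package (the real repair also writes the corrected file back):
	
	package main
	
	import (
		"fmt"
	
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		const profile = "old-k8s-version-571339"
		const path = "/home/jenkins/minikube-integration/18358-982285/kubeconfig" // path from the log
	
		cfg, err := clientcmd.LoadFromFile(path)
		if err != nil {
			fmt.Println("load kubeconfig:", err)
			return
		}
		var missing []string
		if _, ok := cfg.Clusters[profile]; !ok {
			missing = append(missing, fmt.Sprintf("kubeconfig missing %q cluster setting", profile))
		}
		if _, ok := cfg.Contexts[profile]; !ok {
			missing = append(missing, fmt.Sprintf("kubeconfig missing %q context setting", profile))
		}
		if len(missing) > 0 {
			fmt.Printf("%s needs updating (will repair): %v\n", path, missing)
		}
	}
	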
	I0312 00:19:36.133058 1183642 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0312 00:19:36.144065 1183642 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.76.2
	I0312 00:19:36.144103 1183642 kubeadm.go:591] duration metric: took 23.141667ms to restartPrimaryControlPlane
	I0312 00:19:36.144114 1183642 kubeadm.go:393] duration metric: took 107.779355ms to StartCluster
	I0312 00:19:36.144130 1183642 settings.go:142] acquiring lock: {Name:mk66549f73c966ba6f23af9cfb4fef2b1aaf9da2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0312 00:19:36.144201 1183642 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18358-982285/kubeconfig
	I0312 00:19:36.145498 1183642 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18358-982285/kubeconfig: {Name:mk502765d2bd81c45b0b0cd22382df706d40c442 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0312 00:19:36.145941 1183642 config.go:182] Loaded profile config "old-k8s-version-571339": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0312 00:19:36.145997 1183642 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0312 00:19:36.152050 1183642 out.go:177] * Verifying Kubernetes components...
	I0312 00:19:36.146062 1183642 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0312 00:19:36.152139 1183642 addons.go:69] Setting dashboard=true in profile "old-k8s-version-571339"
	I0312 00:19:36.152173 1183642 addons.go:234] Setting addon dashboard=true in "old-k8s-version-571339"
	W0312 00:19:36.152189 1183642 addons.go:243] addon dashboard should already be in state true
	I0312 00:19:36.152225 1183642 host.go:66] Checking if "old-k8s-version-571339" exists ...
	I0312 00:19:36.152702 1183642 cli_runner.go:164] Run: docker container inspect old-k8s-version-571339 --format={{.State.Status}}
	I0312 00:19:36.152831 1183642 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-571339"
	I0312 00:19:36.152875 1183642 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-571339"
	W0312 00:19:36.152887 1183642 addons.go:243] addon storage-provisioner should already be in state true
	I0312 00:19:36.152910 1183642 host.go:66] Checking if "old-k8s-version-571339" exists ...
	I0312 00:19:36.153349 1183642 cli_runner.go:164] Run: docker container inspect old-k8s-version-571339 --format={{.State.Status}}
	I0312 00:19:36.163590 1183642 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0312 00:19:36.156588 1183642 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-571339"
	I0312 00:19:36.163817 1183642 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-571339"
	I0312 00:19:36.164135 1183642 cli_runner.go:164] Run: docker container inspect old-k8s-version-571339 --format={{.State.Status}}
	I0312 00:19:36.156612 1183642 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-571339"
	I0312 00:19:36.171741 1183642 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-571339"
	W0312 00:19:36.171762 1183642 addons.go:243] addon metrics-server should already be in state true
	I0312 00:19:36.171806 1183642 host.go:66] Checking if "old-k8s-version-571339" exists ...
	I0312 00:19:36.172476 1183642 cli_runner.go:164] Run: docker container inspect old-k8s-version-571339 --format={{.State.Status}}
	I0312 00:19:36.197296 1183642 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0312 00:19:36.199642 1183642 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0312 00:19:36.201906 1183642 addons.go:426] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0312 00:19:36.201938 1183642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0312 00:19:36.202041 1183642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-571339
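	
	[editor's note] The "scp memory --> /etc/kubernetes/addons/..." lines mean each addon manifest is streamed from memory into the node over the SSH endpoint recorded at the sshutil.go lines below (127.0.0.1:34197, user docker). A minimal sketch of that step, assuming golang.org/x/crypto/ssh and substituting `tee` for the scp protocol; the key path and port come from this log, the manifest body is a stand-in:
	
	package main
	
	import (
		"bytes"
		"os"
	
		"golang.org/x/crypto/ssh"
	)
	
	func main() {
		key, err := os.ReadFile("/home/jenkins/minikube-integration/18358-982285/.minikube/machines/old-k8s-version-571339/id_rsa")
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			panic(err)
		}
		client, err := ssh.Dial("tcp", "127.0.0.1:34197", &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for a local test node
		})
		if err != nil {
			panic(err)
		}
		defer client.Close()
	
		// Stand-in manifest; the real content is rendered by the addon templates.
		manifest := []byte("apiVersion: v1\nkind: Namespace\nmetadata:\n  name: kubernetes-dashboard\n")
		session, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer session.Close()
		session.Stdin = bytes.NewReader(manifest)
		if err := session.Run("sudo tee /etc/kubernetes/addons/dashboard-ns.yaml >/dev/null"); err != nil {
			panic(err)
		}
	}
	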
	I0312 00:19:36.234063 1183642 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0312 00:19:36.239981 1183642 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0312 00:19:36.240004 1183642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0312 00:19:36.240071 1183642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-571339
	I0312 00:19:36.248415 1183642 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0312 00:19:36.250300 1183642 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0312 00:19:36.250324 1183642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0312 00:19:36.250393 1183642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-571339
	I0312 00:19:36.287089 1183642 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-571339"
	W0312 00:19:36.287120 1183642 addons.go:243] addon default-storageclass should already be in state true
	I0312 00:19:36.287152 1183642 host.go:66] Checking if "old-k8s-version-571339" exists ...
	I0312 00:19:36.287414 1183642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34197 SSHKeyPath:/home/jenkins/minikube-integration/18358-982285/.minikube/machines/old-k8s-version-571339/id_rsa Username:docker}
	I0312 00:19:36.288385 1183642 cli_runner.go:164] Run: docker container inspect old-k8s-version-571339 --format={{.State.Status}}
	I0312 00:19:36.333745 1183642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34197 SSHKeyPath:/home/jenkins/minikube-integration/18358-982285/.minikube/machines/old-k8s-version-571339/id_rsa Username:docker}
	I0312 00:19:36.343002 1183642 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0312 00:19:36.343030 1183642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0312 00:19:36.343100 1183642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-571339
	I0312 00:19:36.344780 1183642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34197 SSHKeyPath:/home/jenkins/minikube-integration/18358-982285/.minikube/machines/old-k8s-version-571339/id_rsa Username:docker}
	I0312 00:19:36.375420 1183642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34197 SSHKeyPath:/home/jenkins/minikube-integration/18358-982285/.minikube/machines/old-k8s-version-571339/id_rsa Username:docker}
	I0312 00:19:36.420313 1183642 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0312 00:19:36.474219 1183642 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-571339" to be "Ready" ...
	I0312 00:19:36.520332 1183642 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0312 00:19:36.520359 1183642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0312 00:19:36.563236 1183642 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0312 00:19:36.563263 1183642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0312 00:19:36.584863 1183642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0312 00:19:36.614870 1183642 addons.go:426] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0312 00:19:36.614944 1183642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0312 00:19:36.629502 1183642 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0312 00:19:36.629574 1183642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0312 00:19:36.654139 1183642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0312 00:19:36.712198 1183642 addons.go:426] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0312 00:19:36.712274 1183642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0312 00:19:36.764027 1183642 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0312 00:19:36.764101 1183642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0312 00:19:36.797773 1183642 addons.go:426] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0312 00:19:36.797847 1183642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	W0312 00:19:36.848350 1183642 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0312 00:19:36.848448 1183642 retry.go:31] will retry after 198.882566ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
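	
	[editor's note] Every "apply failed, will retry" / "will retry after ..." pair in the remainder of this log follows the same pattern: run kubectl apply against the not-yet-listening apiserver, fail with "connection refused", sleep a randomized, growing delay, try again. A minimal sketch of that retry loop; minikube's retry.go differs in detail, and the manifest path is just one example from the log:
	
	package main
	
	import (
		"fmt"
		"math/rand"
		"os/exec"
		"time"
	)
	
	// retry runs fn until it succeeds or attempts run out, sleeping a jittered,
	// doubling delay between tries (the "will retry after ..." lines).
	func retry(attempts int, base time.Duration, fn func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			delay := base*time.Duration(1<<i) + time.Duration(rand.Int63n(int64(base)))
			fmt.Printf("will retry after %v: %v\n", delay, err)
			time.Sleep(delay)
		}
		return err
	}
	
	func main() {
		err := retry(5, 200*time.Millisecond, func() error {
			// Stand-in for the kubectl apply the log keeps retrying.
			return exec.Command("kubectl", "apply", "--force", "-f",
				"/etc/kubernetes/addons/storageclass.yaml").Run()
		})
		if err != nil {
			fmt.Println("giving up:", err)
		}
	}
	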
	I0312 00:19:36.866505 1183642 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0312 00:19:36.866584 1183642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0312 00:19:36.891353 1183642 addons.go:426] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0312 00:19:36.891425 1183642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0312 00:19:36.924977 1183642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0312 00:19:36.926395 1183642 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0312 00:19:36.926506 1183642 retry.go:31] will retry after 284.598578ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0312 00:19:36.965267 1183642 addons.go:426] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0312 00:19:36.965344 1183642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	W0312 00:19:37.047660 1183642 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0312 00:19:37.047743 1183642 retry.go:31] will retry after 279.598784ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0312 00:19:37.047883 1183642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0312 00:19:37.080665 1183642 addons.go:426] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0312 00:19:37.080741 1183642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0312 00:19:37.156913 1183642 addons.go:426] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0312 00:19:37.156977 1183642 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	W0312 00:19:37.178706 1183642 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0312 00:19:37.178794 1183642 retry.go:31] will retry after 224.522162ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0312 00:19:37.193433 1183642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0312 00:19:37.211694 1183642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0312 00:19:37.327233 1183642 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0312 00:19:37.327331 1183642 retry.go:31] will retry after 237.821499ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0312 00:19:37.328276 1183642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0312 00:19:37.403626 1183642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0312 00:19:37.458822 1183642 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0312 00:19:37.458903 1183642 retry.go:31] will retry after 189.923605ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0312 00:19:37.565087 1183642 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0312 00:19:37.565170 1183642 retry.go:31] will retry after 362.412934ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0312 00:19:37.565318 1183642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0312 00:19:37.573897 1183642 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0312 00:19:37.573999 1183642 retry.go:31] will retry after 481.80373ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0312 00:19:37.649857 1183642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0312 00:19:37.685564 1183642 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0312 00:19:37.685643 1183642 retry.go:31] will retry after 239.083993ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0312 00:19:37.760513 1183642 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0312 00:19:37.760602 1183642 retry.go:31] will retry after 736.185082ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0312 00:19:37.925163 1183642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0312 00:19:37.927878 1183642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0312 00:19:38.056778 1183642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0312 00:19:38.116957 1183642 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0312 00:19:38.117002 1183642 retry.go:31] will retry after 826.110547ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0312 00:19:38.153910 1183642 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0312 00:19:38.153958 1183642 retry.go:31] will retry after 425.811006ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0312 00:19:38.234309 1183642 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0312 00:19:38.234343 1183642 retry.go:31] will retry after 642.858099ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0312 00:19:38.475169 1183642 node_ready.go:53] error getting node "old-k8s-version-571339": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-571339": dial tcp 192.168.76.2:8443: connect: connection refused
	I0312 00:19:38.497398 1183642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0312 00:19:38.580831 1183642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0312 00:19:38.590106 1183642 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0312 00:19:38.590152 1183642 retry.go:31] will retry after 620.915493ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0312 00:19:38.688659 1183642 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0312 00:19:38.688693 1183642 retry.go:31] will retry after 537.815514ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0312 00:19:38.878094 1183642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0312 00:19:38.943421 1183642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0312 00:19:38.979888 1183642 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0312 00:19:38.979922 1183642 retry.go:31] will retry after 1.462265632s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0312 00:19:39.074285 1183642 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0312 00:19:39.074320 1183642 retry.go:31] will retry after 966.348537ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0312 00:19:39.211629 1183642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0312 00:19:39.227128 1183642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0312 00:19:39.382054 1183642 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0312 00:19:39.382100 1183642 retry.go:31] will retry after 1.390812722s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0312 00:19:39.416043 1183642 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0312 00:19:39.416077 1183642 retry.go:31] will retry after 1.193780421s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0312 00:19:40.041051 1183642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0312 00:19:40.145032 1183642 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0312 00:19:40.145098 1183642 retry.go:31] will retry after 1.627252687s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0312 00:19:40.442468 1183642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0312 00:19:40.475403 1183642 node_ready.go:53] error getting node "old-k8s-version-571339": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-571339": dial tcp 192.168.76.2:8443: connect: connection refused
	W0312 00:19:40.543695 1183642 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0312 00:19:40.543778 1183642 retry.go:31] will retry after 2.278359957s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0312 00:19:40.610071 1183642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0312 00:19:40.714211 1183642 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0312 00:19:40.714245 1183642 retry.go:31] will retry after 1.31764435s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0312 00:19:40.773599 1183642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0312 00:19:40.868635 1183642 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0312 00:19:40.868664 1183642 retry.go:31] will retry after 1.095904929s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0312 00:19:41.773092 1183642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0312 00:19:41.870567 1183642 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0312 00:19:41.870603 1183642 retry.go:31] will retry after 1.765202948s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0312 00:19:41.964930 1183642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0312 00:19:42.032705 1183642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0312 00:19:42.086008 1183642 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0312 00:19:42.086049 1183642 retry.go:31] will retry after 2.445331579s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0312 00:19:42.198776 1183642 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0312 00:19:42.198816 1183642 retry.go:31] will retry after 3.514217165s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0312 00:19:42.475572 1183642 node_ready.go:53] error getting node "old-k8s-version-571339": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-571339": dial tcp 192.168.76.2:8443: connect: connection refused
	I0312 00:19:42.823169 1183642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0312 00:19:42.922240 1183642 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0312 00:19:42.922272 1183642 retry.go:31] will retry after 4.024510858s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0312 00:19:43.636025 1183642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0312 00:19:43.734056 1183642 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0312 00:19:43.734094 1183642 retry.go:31] will retry after 2.622307387s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0312 00:19:44.532005 1183642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0312 00:19:44.774635 1183642 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0312 00:19:44.774666 1183642 retry.go:31] will retry after 4.26181263s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0312 00:19:44.975360 1183642 node_ready.go:53] error getting node "old-k8s-version-571339": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-571339": dial tcp 192.168.76.2:8443: connect: connection refused
	I0312 00:19:45.713798 1183642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0312 00:19:46.357041 1183642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0312 00:19:46.946937 1183642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0312 00:19:49.037650 1183642 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0312 00:19:56.630305 1183642 node_ready.go:49] node "old-k8s-version-571339" has status "Ready":"True"
	I0312 00:19:56.630331 1183642 node_ready.go:38] duration metric: took 20.156023974s for node "old-k8s-version-571339" to be "Ready" ...
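	
	[editor's note] The node_ready.go wait that just completed brackets all the retries above: it is a poll loop that fetches the node, tolerates "connection refused" while the apiserver restarts, and returns once the Ready condition is True. A minimal sketch using client-go, assuming the in-VM kubeconfig path seen in the kubectl invocations above:
	
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
	
		deadline := time.Now().Add(6 * time.Minute)
		for time.Now().Before(deadline) {
			node, err := client.CoreV1().Nodes().Get(context.TODO(),
				"old-k8s-version-571339", metav1.GetOptions{})
			if err != nil {
				// Mirrors the "connection refused" lines while the apiserver comes up.
				fmt.Println("error getting node:", err)
			} else {
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						fmt.Println(`node has status "Ready":"True"`)
						return
					}
				}
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for node to be Ready")
	}
	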
	I0312 00:19:56.630341 1183642 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0312 00:19:56.992516 1183642 pod_ready.go:78] waiting up to 6m0s for pod "coredns-74ff55c5b-pd7cs" in "kube-system" namespace to be "Ready" ...
	I0312 00:19:57.122827 1183642 pod_ready.go:92] pod "coredns-74ff55c5b-pd7cs" in "kube-system" namespace has status "Ready":"True"
	I0312 00:19:57.122904 1183642 pod_ready.go:81] duration metric: took 130.298802ms for pod "coredns-74ff55c5b-pd7cs" in "kube-system" namespace to be "Ready" ...
	I0312 00:19:57.122930 1183642 pod_ready.go:78] waiting up to 6m0s for pod "etcd-old-k8s-version-571339" in "kube-system" namespace to be "Ready" ...
	I0312 00:19:57.203973 1183642 pod_ready.go:92] pod "etcd-old-k8s-version-571339" in "kube-system" namespace has status "Ready":"True"
	I0312 00:19:57.204049 1183642 pod_ready.go:81] duration metric: took 81.098773ms for pod "etcd-old-k8s-version-571339" in "kube-system" namespace to be "Ready" ...
	I0312 00:19:57.204077 1183642 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-571339" in "kube-system" namespace to be "Ready" ...
	I0312 00:19:58.890393 1183642 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (13.176549984s)
	I0312 00:19:58.890460 1183642 addons.go:470] Verifying addon metrics-server=true in "old-k8s-version-571339"
	I0312 00:19:59.038960 1183642 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (12.681872642s)
	I0312 00:19:59.040915 1183642 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-571339 addons enable metrics-server
	
	I0312 00:19:59.039114 1183642 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (12.092144s)
	I0312 00:19:59.039180 1183642 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (10.001506048s)
	I0312 00:19:59.056365 1183642 out.go:177] * Enabled addons: metrics-server, dashboard, storage-provisioner, default-storageclass
	I0312 00:19:59.058375 1183642 addons.go:505] duration metric: took 22.9122948s for enable addons: enabled=[metrics-server dashboard storage-provisioner default-storageclass]
	I0312 00:19:59.214769 1183642 pod_ready.go:102] pod "kube-apiserver-old-k8s-version-571339" in "kube-system" namespace has status "Ready":"False"
	I0312 00:20:01.712824 1183642 pod_ready.go:102] pod "kube-apiserver-old-k8s-version-571339" in "kube-system" namespace has status "Ready":"False"
	I0312 00:20:04.210681 1183642 pod_ready.go:102] pod "kube-apiserver-old-k8s-version-571339" in "kube-system" namespace has status "Ready":"False"
	I0312 00:20:06.210987 1183642 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-571339" in "kube-system" namespace has status "Ready":"True"
	I0312 00:20:06.211014 1183642 pod_ready.go:81] duration metric: took 9.006916676s for pod "kube-apiserver-old-k8s-version-571339" in "kube-system" namespace to be "Ready" ...
	I0312 00:20:06.211026 1183642 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-571339" in "kube-system" namespace to be "Ready" ...
	I0312 00:20:08.217646 1183642 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-571339" in "kube-system" namespace has status "Ready":"False"
	I0312 00:20:10.718546 1183642 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-571339" in "kube-system" namespace has status "Ready":"False"
	I0312 00:20:12.725138 1183642 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-571339" in "kube-system" namespace has status "Ready":"False"
	I0312 00:20:15.220888 1183642 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-571339" in "kube-system" namespace has status "Ready":"False"
	I0312 00:20:17.719016 1183642 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-571339" in "kube-system" namespace has status "Ready":"False"
	I0312 00:20:19.719148 1183642 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-571339" in "kube-system" namespace has status "Ready":"False"
	I0312 00:20:21.770787 1183642 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-571339" in "kube-system" namespace has status "Ready":"False"
	I0312 00:20:24.218434 1183642 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-571339" in "kube-system" namespace has status "Ready":"False"
	I0312 00:20:26.724512 1183642 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-571339" in "kube-system" namespace has status "Ready":"False"
	I0312 00:20:29.218890 1183642 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-571339" in "kube-system" namespace has status "Ready":"False"
	I0312 00:20:31.227402 1183642 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-571339" in "kube-system" namespace has status "Ready":"False"
	I0312 00:20:33.718093 1183642 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-571339" in "kube-system" namespace has status "Ready":"False"
	I0312 00:20:36.218085 1183642 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-571339" in "kube-system" namespace has status "Ready":"False"
	I0312 00:20:38.219444 1183642 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-571339" in "kube-system" namespace has status "Ready":"False"
	I0312 00:20:40.718284 1183642 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-571339" in "kube-system" namespace has status "Ready":"False"
	I0312 00:20:42.718531 1183642 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-571339" in "kube-system" namespace has status "Ready":"False"
	I0312 00:20:45.217852 1183642 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-571339" in "kube-system" namespace has status "Ready":"False"
	I0312 00:20:47.220802 1183642 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-571339" in "kube-system" namespace has status "Ready":"False"
	I0312 00:20:49.718150 1183642 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-571339" in "kube-system" namespace has status "Ready":"False"
	I0312 00:20:52.223566 1183642 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-571339" in "kube-system" namespace has status "Ready":"False"
	I0312 00:20:54.718119 1183642 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-571339" in "kube-system" namespace has status "Ready":"False"
	I0312 00:20:56.719178 1183642 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-571339" in "kube-system" namespace has status "Ready":"False"
	I0312 00:20:59.220911 1183642 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-571339" in "kube-system" namespace has status "Ready":"False"
	I0312 00:21:01.719534 1183642 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-571339" in "kube-system" namespace has status "Ready":"False"
	I0312 00:21:03.227170 1183642 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-571339" in "kube-system" namespace has status "Ready":"True"
	I0312 00:21:03.227190 1183642 pod_ready.go:81] duration metric: took 57.016156506s for pod "kube-controller-manager-old-k8s-version-571339" in "kube-system" namespace to be "Ready" ...
	I0312 00:21:03.227202 1183642 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tvrz6" in "kube-system" namespace to be "Ready" ...
	I0312 00:21:03.233214 1183642 pod_ready.go:92] pod "kube-proxy-tvrz6" in "kube-system" namespace has status "Ready":"True"
	I0312 00:21:03.233236 1183642 pod_ready.go:81] duration metric: took 6.026509ms for pod "kube-proxy-tvrz6" in "kube-system" namespace to be "Ready" ...
	I0312 00:21:03.233252 1183642 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-571339" in "kube-system" namespace to be "Ready" ...
	I0312 00:21:05.239789 1183642 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-571339" in "kube-system" namespace has status "Ready":"False"
	I0312 00:21:07.740120 1183642 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-571339" in "kube-system" namespace has status "Ready":"False"
	I0312 00:21:10.240027 1183642 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-571339" in "kube-system" namespace has status "Ready":"False"
	I0312 00:21:12.739407 1183642 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-571339" in "kube-system" namespace has status "Ready":"False"
	I0312 00:21:14.739905 1183642 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-571339" in "kube-system" namespace has status "Ready":"False"
	I0312 00:21:17.239412 1183642 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-571339" in "kube-system" namespace has status "Ready":"False"
	I0312 00:21:19.240306 1183642 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-571339" in "kube-system" namespace has status "Ready":"False"
	I0312 00:21:21.741209 1183642 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-571339" in "kube-system" namespace has status "Ready":"False"
	I0312 00:21:23.741756 1183642 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-571339" in "kube-system" namespace has status "Ready":"False"
	I0312 00:21:25.243129 1183642 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-571339" in "kube-system" namespace has status "Ready":"True"
	I0312 00:21:25.243227 1183642 pod_ready.go:81] duration metric: took 22.009965136s for pod "kube-scheduler-old-k8s-version-571339" in "kube-system" namespace to be "Ready" ...
	I0312 00:21:25.243292 1183642 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace to be "Ready" ...
	I0312 00:21:27.249922 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:21:29.253629 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:21:31.750359 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:21:34.249496 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:21:36.250802 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:21:38.250948 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:21:40.754158 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:21:43.249095 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:21:45.250392 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:21:47.749722 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:21:49.749754 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:21:51.750403 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:21:54.250385 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:21:56.770534 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:21:59.249206 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:22:01.258864 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:22:03.749523 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:22:05.750110 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:22:08.250519 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:22:10.750288 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:22:13.250136 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:22:15.256797 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:22:17.749938 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:22:20.249074 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:22:22.249939 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:22:24.748800 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:22:26.749624 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:22:29.253144 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:22:31.749968 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:22:34.250397 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:22:36.749482 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:22:39.249893 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:22:41.749896 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:22:44.249675 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:22:46.749855 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:22:49.249664 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:22:51.249976 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:22:53.250606 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:22:55.750464 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:22:58.249257 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:23:00.267356 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:23:02.750532 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:23:05.250606 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:23:07.750716 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:23:10.251242 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:23:12.749463 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:23:15.252504 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:23:17.749439 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:23:19.750446 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:23:22.249233 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:23:24.750135 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:23:27.249368 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:23:29.751700 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:23:32.252338 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:23:34.749267 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:23:36.750479 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:23:39.250553 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:23:41.750376 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:23:43.750574 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:23:46.249404 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:23:48.250459 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:23:50.259454 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:23:52.750308 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:23:55.250042 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:23:57.750235 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:24:00.267118 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:24:02.748886 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:24:04.749475 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:24:07.250678 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:24:09.749225 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:24:11.750512 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:24:14.250329 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:24:16.749575 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:24:19.250281 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:24:21.749352 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:24:23.749394 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:24:25.749579 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:24:27.750348 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:24:30.249873 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:24:32.249981 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:24:34.749339 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:24:37.249723 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:24:39.249953 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:24:41.749722 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:24:43.750163 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:24:46.249711 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:24:48.249848 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:24:50.250297 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:24:52.753532 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:24:55.251518 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:24:57.749656 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:24:59.749784 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:25:01.750995 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:25:04.251144 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:25:06.749259 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:25:08.749400 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:25:10.750185 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:25:13.249560 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:25:15.249809 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:25:17.250310 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:25:19.749835 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:25:21.749927 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:25:23.750869 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:25:25.249870 1183642 pod_ready.go:81] duration metric: took 4m0.006519105s for pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace to be "Ready" ...
	E0312 00:25:25.249898 1183642 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0312 00:25:25.249907 1183642 pod_ready.go:38] duration metric: took 5m28.619543264s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0312 00:25:25.256852 1183642 api_server.go:52] waiting for apiserver process to appear ...
	I0312 00:25:25.256936 1183642 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0312 00:25:25.257019 1183642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0312 00:25:25.300453 1183642 cri.go:89] found id: "e90658574cccc9b56ea1fd38865b78eb14b34d54f7b6d6f655f8b82d026ee372"
	I0312 00:25:25.300515 1183642 cri.go:89] found id: "022154a50546e744b25648ac078a4535c3a97e91f97547e8008e89235fd126f5"
	I0312 00:25:25.300533 1183642 cri.go:89] found id: ""
	I0312 00:25:25.300551 1183642 logs.go:276] 2 containers: [e90658574cccc9b56ea1fd38865b78eb14b34d54f7b6d6f655f8b82d026ee372 022154a50546e744b25648ac078a4535c3a97e91f97547e8008e89235fd126f5]
	I0312 00:25:25.300638 1183642 ssh_runner.go:195] Run: which crictl
	I0312 00:25:25.304205 1183642 ssh_runner.go:195] Run: which crictl
	I0312 00:25:25.307725 1183642 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0312 00:25:25.307813 1183642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0312 00:25:25.344547 1183642 cri.go:89] found id: "227b5f4c3ec0b541f1e734b3a9400260044363214781e1a18f0928e954c98086"
	I0312 00:25:25.344570 1183642 cri.go:89] found id: "00fea42a626bc543839c933d2b36dd4155e2329531f2c8a74fa65079753377a9"
	I0312 00:25:25.344576 1183642 cri.go:89] found id: ""
	I0312 00:25:25.344583 1183642 logs.go:276] 2 containers: [227b5f4c3ec0b541f1e734b3a9400260044363214781e1a18f0928e954c98086 00fea42a626bc543839c933d2b36dd4155e2329531f2c8a74fa65079753377a9]
	I0312 00:25:25.344658 1183642 ssh_runner.go:195] Run: which crictl
	I0312 00:25:25.348234 1183642 ssh_runner.go:195] Run: which crictl
	I0312 00:25:25.351686 1183642 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0312 00:25:25.351805 1183642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0312 00:25:25.393110 1183642 cri.go:89] found id: "0d3039260ff7a1d3154eac6a37f5460535e860a77d3473b023822022e245e097"
	I0312 00:25:25.393134 1183642 cri.go:89] found id: "91ba0f1087505b74193c749143407b360ca52adeb6c8e6fed4c64111ff7ac963"
	I0312 00:25:25.393139 1183642 cri.go:89] found id: ""
	I0312 00:25:25.393146 1183642 logs.go:276] 2 containers: [0d3039260ff7a1d3154eac6a37f5460535e860a77d3473b023822022e245e097 91ba0f1087505b74193c749143407b360ca52adeb6c8e6fed4c64111ff7ac963]
	I0312 00:25:25.393203 1183642 ssh_runner.go:195] Run: which crictl
	I0312 00:25:25.396999 1183642 ssh_runner.go:195] Run: which crictl
	I0312 00:25:25.400568 1183642 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0312 00:25:25.400686 1183642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0312 00:25:25.439058 1183642 cri.go:89] found id: "cda4f1be508f3de4744e406ac4acfcb87143068155c189f0e7506f78db3a42c9"
	I0312 00:25:25.439084 1183642 cri.go:89] found id: "c3f64500c09efd7fdf78260f3bef5ed1adaefa3e3a847a7540726cbee6bd042f"
	I0312 00:25:25.439089 1183642 cri.go:89] found id: ""
	I0312 00:25:25.439096 1183642 logs.go:276] 2 containers: [cda4f1be508f3de4744e406ac4acfcb87143068155c189f0e7506f78db3a42c9 c3f64500c09efd7fdf78260f3bef5ed1adaefa3e3a847a7540726cbee6bd042f]
	I0312 00:25:25.439197 1183642 ssh_runner.go:195] Run: which crictl
	I0312 00:25:25.443086 1183642 ssh_runner.go:195] Run: which crictl
	I0312 00:25:25.446900 1183642 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0312 00:25:25.446974 1183642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0312 00:25:25.488093 1183642 cri.go:89] found id: "46d69c632200b08b2b8f94cd051969df887d9d074acace5d606fef37cc84295e"
	I0312 00:25:25.488164 1183642 cri.go:89] found id: "8abc9a2fec8f5340e92089f46c5ff2bf798571fbcb6c7ce9545d0e353715bed4"
	I0312 00:25:25.488181 1183642 cri.go:89] found id: ""
	I0312 00:25:25.488196 1183642 logs.go:276] 2 containers: [46d69c632200b08b2b8f94cd051969df887d9d074acace5d606fef37cc84295e 8abc9a2fec8f5340e92089f46c5ff2bf798571fbcb6c7ce9545d0e353715bed4]
	I0312 00:25:25.488286 1183642 ssh_runner.go:195] Run: which crictl
	I0312 00:25:25.492007 1183642 ssh_runner.go:195] Run: which crictl
	I0312 00:25:25.495477 1183642 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0312 00:25:25.495557 1183642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0312 00:25:25.536694 1183642 cri.go:89] found id: "ea472223f1505d76ec6e5c18af4f3ab7760ebdefed097a213c78a396e15d7ba7"
	I0312 00:25:25.536759 1183642 cri.go:89] found id: "ac83611721f7a3d26415ed5ae3625edece62f8bd00bc9a63ce61ffa2ad2c9fbc"
	I0312 00:25:25.536771 1183642 cri.go:89] found id: ""
	I0312 00:25:25.536779 1183642 logs.go:276] 2 containers: [ea472223f1505d76ec6e5c18af4f3ab7760ebdefed097a213c78a396e15d7ba7 ac83611721f7a3d26415ed5ae3625edece62f8bd00bc9a63ce61ffa2ad2c9fbc]
	I0312 00:25:25.536850 1183642 ssh_runner.go:195] Run: which crictl
	I0312 00:25:25.540733 1183642 ssh_runner.go:195] Run: which crictl
	I0312 00:25:25.544497 1183642 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0312 00:25:25.544627 1183642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0312 00:25:25.581655 1183642 cri.go:89] found id: "ae84159c4657eff5eabe4d3d9526af6b40457144653c8a1c7e1b3bc077bdcad0"
	I0312 00:25:25.581679 1183642 cri.go:89] found id: "1a5414516a6e0b5fd6faf9a04f4428d130258262f60692b0dffc8b7ffc8541a6"
	I0312 00:25:25.581684 1183642 cri.go:89] found id: ""
	I0312 00:25:25.581705 1183642 logs.go:276] 2 containers: [ae84159c4657eff5eabe4d3d9526af6b40457144653c8a1c7e1b3bc077bdcad0 1a5414516a6e0b5fd6faf9a04f4428d130258262f60692b0dffc8b7ffc8541a6]
	I0312 00:25:25.581770 1183642 ssh_runner.go:195] Run: which crictl
	I0312 00:25:25.585569 1183642 ssh_runner.go:195] Run: which crictl
	I0312 00:25:25.589471 1183642 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0312 00:25:25.589550 1183642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0312 00:25:25.630552 1183642 cri.go:89] found id: "98a1386a1a083a30c283c882c4ad3a528364088aba6315aa3bd42ba324436879"
	I0312 00:25:25.630576 1183642 cri.go:89] found id: "0c3514f843ab806b20582bed37c5a7606b322a3eae956ca0d2a4c8b59c7beb86"
	I0312 00:25:25.630581 1183642 cri.go:89] found id: ""
	I0312 00:25:25.630589 1183642 logs.go:276] 2 containers: [98a1386a1a083a30c283c882c4ad3a528364088aba6315aa3bd42ba324436879 0c3514f843ab806b20582bed37c5a7606b322a3eae956ca0d2a4c8b59c7beb86]
	I0312 00:25:25.630648 1183642 ssh_runner.go:195] Run: which crictl
	I0312 00:25:25.634419 1183642 ssh_runner.go:195] Run: which crictl
	I0312 00:25:25.637992 1183642 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0312 00:25:25.638081 1183642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0312 00:25:25.684057 1183642 cri.go:89] found id: "cb775d92b00e5c14170849b1b42ccfd48f3c9d18c9b5da2f8234588eaf4aa2ec"
	I0312 00:25:25.684120 1183642 cri.go:89] found id: ""
	I0312 00:25:25.684132 1183642 logs.go:276] 1 containers: [cb775d92b00e5c14170849b1b42ccfd48f3c9d18c9b5da2f8234588eaf4aa2ec]
	I0312 00:25:25.684207 1183642 ssh_runner.go:195] Run: which crictl
	I0312 00:25:25.688044 1183642 logs.go:123] Gathering logs for kube-apiserver [e90658574cccc9b56ea1fd38865b78eb14b34d54f7b6d6f655f8b82d026ee372] ...
	I0312 00:25:25.688071 1183642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e90658574cccc9b56ea1fd38865b78eb14b34d54f7b6d6f655f8b82d026ee372"
	I0312 00:25:25.746459 1183642 logs.go:123] Gathering logs for kube-apiserver [022154a50546e744b25648ac078a4535c3a97e91f97547e8008e89235fd126f5] ...
	I0312 00:25:25.746492 1183642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 022154a50546e744b25648ac078a4535c3a97e91f97547e8008e89235fd126f5"
	I0312 00:25:25.807669 1183642 logs.go:123] Gathering logs for coredns [0d3039260ff7a1d3154eac6a37f5460535e860a77d3473b023822022e245e097] ...
	I0312 00:25:25.807705 1183642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0d3039260ff7a1d3154eac6a37f5460535e860a77d3473b023822022e245e097"
	I0312 00:25:25.858063 1183642 logs.go:123] Gathering logs for kube-scheduler [cda4f1be508f3de4744e406ac4acfcb87143068155c189f0e7506f78db3a42c9] ...
	I0312 00:25:25.858091 1183642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cda4f1be508f3de4744e406ac4acfcb87143068155c189f0e7506f78db3a42c9"
	I0312 00:25:25.900479 1183642 logs.go:123] Gathering logs for kube-proxy [8abc9a2fec8f5340e92089f46c5ff2bf798571fbcb6c7ce9545d0e353715bed4] ...
	I0312 00:25:25.900511 1183642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8abc9a2fec8f5340e92089f46c5ff2bf798571fbcb6c7ce9545d0e353715bed4"
	I0312 00:25:25.946460 1183642 logs.go:123] Gathering logs for kindnet [1a5414516a6e0b5fd6faf9a04f4428d130258262f60692b0dffc8b7ffc8541a6] ...
	I0312 00:25:25.946491 1183642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a5414516a6e0b5fd6faf9a04f4428d130258262f60692b0dffc8b7ffc8541a6"
	I0312 00:25:25.992941 1183642 logs.go:123] Gathering logs for kubernetes-dashboard [cb775d92b00e5c14170849b1b42ccfd48f3c9d18c9b5da2f8234588eaf4aa2ec] ...
	I0312 00:25:25.992975 1183642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cb775d92b00e5c14170849b1b42ccfd48f3c9d18c9b5da2f8234588eaf4aa2ec"
	I0312 00:25:26.047245 1183642 logs.go:123] Gathering logs for describe nodes ...
	I0312 00:25:26.047275 1183642 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0312 00:25:26.245179 1183642 logs.go:123] Gathering logs for coredns [91ba0f1087505b74193c749143407b360ca52adeb6c8e6fed4c64111ff7ac963] ...
	I0312 00:25:26.245208 1183642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 91ba0f1087505b74193c749143407b360ca52adeb6c8e6fed4c64111ff7ac963"
	I0312 00:25:26.286685 1183642 logs.go:123] Gathering logs for kube-proxy [46d69c632200b08b2b8f94cd051969df887d9d074acace5d606fef37cc84295e] ...
	I0312 00:25:26.286717 1183642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 46d69c632200b08b2b8f94cd051969df887d9d074acace5d606fef37cc84295e"
	I0312 00:25:26.326586 1183642 logs.go:123] Gathering logs for kindnet [ae84159c4657eff5eabe4d3d9526af6b40457144653c8a1c7e1b3bc077bdcad0] ...
	I0312 00:25:26.326619 1183642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ae84159c4657eff5eabe4d3d9526af6b40457144653c8a1c7e1b3bc077bdcad0"
	I0312 00:25:26.371019 1183642 logs.go:123] Gathering logs for storage-provisioner [0c3514f843ab806b20582bed37c5a7606b322a3eae956ca0d2a4c8b59c7beb86] ...
	I0312 00:25:26.371048 1183642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c3514f843ab806b20582bed37c5a7606b322a3eae956ca0d2a4c8b59c7beb86"
	I0312 00:25:26.437329 1183642 logs.go:123] Gathering logs for etcd [227b5f4c3ec0b541f1e734b3a9400260044363214781e1a18f0928e954c98086] ...
	I0312 00:25:26.437363 1183642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 227b5f4c3ec0b541f1e734b3a9400260044363214781e1a18f0928e954c98086"
	I0312 00:25:26.491828 1183642 logs.go:123] Gathering logs for kube-scheduler [c3f64500c09efd7fdf78260f3bef5ed1adaefa3e3a847a7540726cbee6bd042f] ...
	I0312 00:25:26.491858 1183642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c3f64500c09efd7fdf78260f3bef5ed1adaefa3e3a847a7540726cbee6bd042f"
	I0312 00:25:26.534467 1183642 logs.go:123] Gathering logs for storage-provisioner [98a1386a1a083a30c283c882c4ad3a528364088aba6315aa3bd42ba324436879] ...
	I0312 00:25:26.534502 1183642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 98a1386a1a083a30c283c882c4ad3a528364088aba6315aa3bd42ba324436879"
	I0312 00:25:26.573116 1183642 logs.go:123] Gathering logs for containerd ...
	I0312 00:25:26.573190 1183642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0312 00:25:26.637555 1183642 logs.go:123] Gathering logs for container status ...
	I0312 00:25:26.637593 1183642 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0312 00:25:26.690245 1183642 logs.go:123] Gathering logs for kubelet ...
	I0312 00:25:26.690283 1183642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0312 00:25:26.757014 1183642 logs.go:138] Found kubelet problem: Mar 12 00:19:58 old-k8s-version-571339 kubelet[663]: E0312 00:19:58.352250     663 pod_workers.go:191] Error syncing pod f8f3ac0e-3c2e-451d-8e5a-d3937eab4142 ("metrics-server-9975d5f86-c87xf_kube-system(f8f3ac0e-3c2e-451d-8e5a-d3937eab4142)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0312 00:25:26.757217 1183642 logs.go:138] Found kubelet problem: Mar 12 00:19:58 old-k8s-version-571339 kubelet[663]: E0312 00:19:58.560744     663 pod_workers.go:191] Error syncing pod f8f3ac0e-3c2e-451d-8e5a-d3937eab4142 ("metrics-server-9975d5f86-c87xf_kube-system(f8f3ac0e-3c2e-451d-8e5a-d3937eab4142)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0312 00:25:26.760035 1183642 logs.go:138] Found kubelet problem: Mar 12 00:20:13 old-k8s-version-571339 kubelet[663]: E0312 00:20:13.145527     663 pod_workers.go:191] Error syncing pod f8f3ac0e-3c2e-451d-8e5a-d3937eab4142 ("metrics-server-9975d5f86-c87xf_kube-system(f8f3ac0e-3c2e-451d-8e5a-d3937eab4142)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0312 00:25:26.760786 1183642 logs.go:138] Found kubelet problem: Mar 12 00:20:15 old-k8s-version-571339 kubelet[663]: E0312 00:20:15.137936     663 reflector.go:138] object-"kubernetes-dashboard"/"kubernetes-dashboard-token-5nnmb": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kubernetes-dashboard-token-5nnmb" is forbidden: User "system:node:old-k8s-version-571339" cannot list resource "secrets" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'old-k8s-version-571339' and this object
	W0312 00:25:26.764232 1183642 logs.go:138] Found kubelet problem: Mar 12 00:20:27 old-k8s-version-571339 kubelet[663]: E0312 00:20:27.691697     663 pod_workers.go:191] Error syncing pod 917984d8-3e49-416e-bb76-594879112f42 ("dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"
	W0312 00:25:26.764435 1183642 logs.go:138] Found kubelet problem: Mar 12 00:20:28 old-k8s-version-571339 kubelet[663]: E0312 00:20:28.142650     663 pod_workers.go:191] Error syncing pod f8f3ac0e-3c2e-451d-8e5a-d3937eab4142 ("metrics-server-9975d5f86-c87xf_kube-system(f8f3ac0e-3c2e-451d-8e5a-d3937eab4142)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0312 00:25:26.764781 1183642 logs.go:138] Found kubelet problem: Mar 12 00:20:28 old-k8s-version-571339 kubelet[663]: E0312 00:20:28.687196     663 pod_workers.go:191] Error syncing pod 917984d8-3e49-416e-bb76-594879112f42 ("dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"
	W0312 00:25:26.765236 1183642 logs.go:138] Found kubelet problem: Mar 12 00:20:29 old-k8s-version-571339 kubelet[663]: E0312 00:20:29.694628     663 pod_workers.go:191] Error syncing pod c73ffc75-b4a0-4184-80c3-a73e21cc954e ("storage-provisioner_kube-system(c73ffc75-b4a0-4184-80c3-a73e21cc954e)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(c73ffc75-b4a0-4184-80c3-a73e21cc954e)"
	W0312 00:25:26.765570 1183642 logs.go:138] Found kubelet problem: Mar 12 00:20:30 old-k8s-version-571339 kubelet[663]: E0312 00:20:30.101093     663 pod_workers.go:191] Error syncing pod 917984d8-3e49-416e-bb76-594879112f42 ("dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"
	W0312 00:25:26.766572 1183642 logs.go:138] Found kubelet problem: Mar 12 00:20:42 old-k8s-version-571339 kubelet[663]: E0312 00:20:42.735991     663 pod_workers.go:191] Error syncing pod 917984d8-3e49-416e-bb76-594879112f42 ("dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"
	W0312 00:25:26.769242 1183642 logs.go:138] Found kubelet problem: Mar 12 00:20:43 old-k8s-version-571339 kubelet[663]: E0312 00:20:43.152194     663 pod_workers.go:191] Error syncing pod f8f3ac0e-3c2e-451d-8e5a-d3937eab4142 ("metrics-server-9975d5f86-c87xf_kube-system(f8f3ac0e-3c2e-451d-8e5a-d3937eab4142)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0312 00:25:26.769586 1183642 logs.go:138] Found kubelet problem: Mar 12 00:20:50 old-k8s-version-571339 kubelet[663]: E0312 00:20:50.100788     663 pod_workers.go:191] Error syncing pod 917984d8-3e49-416e-bb76-594879112f42 ("dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"
	W0312 00:25:26.769781 1183642 logs.go:138] Found kubelet problem: Mar 12 00:20:54 old-k8s-version-571339 kubelet[663]: E0312 00:20:54.136340     663 pod_workers.go:191] Error syncing pod f8f3ac0e-3c2e-451d-8e5a-d3937eab4142 ("metrics-server-9975d5f86-c87xf_kube-system(f8f3ac0e-3c2e-451d-8e5a-d3937eab4142)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0312 00:25:26.770396 1183642 logs.go:138] Found kubelet problem: Mar 12 00:21:03 old-k8s-version-571339 kubelet[663]: E0312 00:21:03.794154     663 pod_workers.go:191] Error syncing pod 917984d8-3e49-416e-bb76-594879112f42 ("dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"
	W0312 00:25:26.770580 1183642 logs.go:138] Found kubelet problem: Mar 12 00:21:07 old-k8s-version-571339 kubelet[663]: E0312 00:21:07.135168     663 pod_workers.go:191] Error syncing pod f8f3ac0e-3c2e-451d-8e5a-d3937eab4142 ("metrics-server-9975d5f86-c87xf_kube-system(f8f3ac0e-3c2e-451d-8e5a-d3937eab4142)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0312 00:25:26.770905 1183642 logs.go:138] Found kubelet problem: Mar 12 00:21:10 old-k8s-version-571339 kubelet[663]: E0312 00:21:10.101222     663 pod_workers.go:191] Error syncing pod 917984d8-3e49-416e-bb76-594879112f42 ("dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"
	W0312 00:25:26.771102 1183642 logs.go:138] Found kubelet problem: Mar 12 00:21:22 old-k8s-version-571339 kubelet[663]: E0312 00:21:22.136421     663 pod_workers.go:191] Error syncing pod f8f3ac0e-3c2e-451d-8e5a-d3937eab4142 ("metrics-server-9975d5f86-c87xf_kube-system(f8f3ac0e-3c2e-451d-8e5a-d3937eab4142)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0312 00:25:26.771444 1183642 logs.go:138] Found kubelet problem: Mar 12 00:21:25 old-k8s-version-571339 kubelet[663]: E0312 00:21:25.135960     663 pod_workers.go:191] Error syncing pod 917984d8-3e49-416e-bb76-594879112f42 ("dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"
	W0312 00:25:26.773913 1183642 logs.go:138] Found kubelet problem: Mar 12 00:21:33 old-k8s-version-571339 kubelet[663]: E0312 00:21:33.143768     663 pod_workers.go:191] Error syncing pod f8f3ac0e-3c2e-451d-8e5a-d3937eab4142 ("metrics-server-9975d5f86-c87xf_kube-system(f8f3ac0e-3c2e-451d-8e5a-d3937eab4142)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0312 00:25:26.774266 1183642 logs.go:138] Found kubelet problem: Mar 12 00:21:40 old-k8s-version-571339 kubelet[663]: E0312 00:21:40.134926     663 pod_workers.go:191] Error syncing pod 917984d8-3e49-416e-bb76-594879112f42 ("dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"
	W0312 00:25:26.774456 1183642 logs.go:138] Found kubelet problem: Mar 12 00:21:45 old-k8s-version-571339 kubelet[663]: E0312 00:21:45.143152     663 pod_workers.go:191] Error syncing pod f8f3ac0e-3c2e-451d-8e5a-d3937eab4142 ("metrics-server-9975d5f86-c87xf_kube-system(f8f3ac0e-3c2e-451d-8e5a-d3937eab4142)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0312 00:25:26.775048 1183642 logs.go:138] Found kubelet problem: Mar 12 00:21:51 old-k8s-version-571339 kubelet[663]: E0312 00:21:51.956550     663 pod_workers.go:191] Error syncing pod 917984d8-3e49-416e-bb76-594879112f42 ("dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"
	W0312 00:25:26.775388 1183642 logs.go:138] Found kubelet problem: Mar 12 00:22:00 old-k8s-version-571339 kubelet[663]: E0312 00:22:00.104419     663 pod_workers.go:191] Error syncing pod 917984d8-3e49-416e-bb76-594879112f42 ("dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"
	W0312 00:25:26.775575 1183642 logs.go:138] Found kubelet problem: Mar 12 00:22:00 old-k8s-version-571339 kubelet[663]: E0312 00:22:00.145154     663 pod_workers.go:191] Error syncing pod f8f3ac0e-3c2e-451d-8e5a-d3937eab4142 ("metrics-server-9975d5f86-c87xf_kube-system(f8f3ac0e-3c2e-451d-8e5a-d3937eab4142)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0312 00:25:26.775901 1183642 logs.go:138] Found kubelet problem: Mar 12 00:22:15 old-k8s-version-571339 kubelet[663]: E0312 00:22:15.136079     663 pod_workers.go:191] Error syncing pod 917984d8-3e49-416e-bb76-594879112f42 ("dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"
	W0312 00:25:26.776087 1183642 logs.go:138] Found kubelet problem: Mar 12 00:22:15 old-k8s-version-571339 kubelet[663]: E0312 00:22:15.136820     663 pod_workers.go:191] Error syncing pod f8f3ac0e-3c2e-451d-8e5a-d3937eab4142 ("metrics-server-9975d5f86-c87xf_kube-system(f8f3ac0e-3c2e-451d-8e5a-d3937eab4142)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0312 00:25:26.776271 1183642 logs.go:138] Found kubelet problem: Mar 12 00:22:28 old-k8s-version-571339 kubelet[663]: E0312 00:22:28.135086     663 pod_workers.go:191] Error syncing pod f8f3ac0e-3c2e-451d-8e5a-d3937eab4142 ("metrics-server-9975d5f86-c87xf_kube-system(f8f3ac0e-3c2e-451d-8e5a-d3937eab4142)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0312 00:25:26.776603 1183642 logs.go:138] Found kubelet problem: Mar 12 00:22:30 old-k8s-version-571339 kubelet[663]: E0312 00:22:30.140098     663 pod_workers.go:191] Error syncing pod 917984d8-3e49-416e-bb76-594879112f42 ("dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"
	W0312 00:25:26.776928 1183642 logs.go:138] Found kubelet problem: Mar 12 00:22:42 old-k8s-version-571339 kubelet[663]: E0312 00:22:42.135063     663 pod_workers.go:191] Error syncing pod 917984d8-3e49-416e-bb76-594879112f42 ("dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"
	W0312 00:25:26.777116 1183642 logs.go:138] Found kubelet problem: Mar 12 00:22:43 old-k8s-version-571339 kubelet[663]: E0312 00:22:43.135158     663 pod_workers.go:191] Error syncing pod f8f3ac0e-3c2e-451d-8e5a-d3937eab4142 ("metrics-server-9975d5f86-c87xf_kube-system(f8f3ac0e-3c2e-451d-8e5a-d3937eab4142)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0312 00:25:26.779571 1183642 logs.go:138] Found kubelet problem: Mar 12 00:22:54 old-k8s-version-571339 kubelet[663]: E0312 00:22:54.143460     663 pod_workers.go:191] Error syncing pod f8f3ac0e-3c2e-451d-8e5a-d3937eab4142 ("metrics-server-9975d5f86-c87xf_kube-system(f8f3ac0e-3c2e-451d-8e5a-d3937eab4142)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0312 00:25:26.779896 1183642 logs.go:138] Found kubelet problem: Mar 12 00:22:56 old-k8s-version-571339 kubelet[663]: E0312 00:22:56.134757     663 pod_workers.go:191] Error syncing pod 917984d8-3e49-416e-bb76-594879112f42 ("dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"
	W0312 00:25:26.780081 1183642 logs.go:138] Found kubelet problem: Mar 12 00:23:06 old-k8s-version-571339 kubelet[663]: E0312 00:23:06.135565     663 pod_workers.go:191] Error syncing pod f8f3ac0e-3c2e-451d-8e5a-d3937eab4142 ("metrics-server-9975d5f86-c87xf_kube-system(f8f3ac0e-3c2e-451d-8e5a-d3937eab4142)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0312 00:25:26.780408 1183642 logs.go:138] Found kubelet problem: Mar 12 00:23:07 old-k8s-version-571339 kubelet[663]: E0312 00:23:07.134751     663 pod_workers.go:191] Error syncing pod 917984d8-3e49-416e-bb76-594879112f42 ("dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"
	W0312 00:25:26.780994 1183642 logs.go:138] Found kubelet problem: Mar 12 00:23:19 old-k8s-version-571339 kubelet[663]: E0312 00:23:19.156710     663 pod_workers.go:191] Error syncing pod 917984d8-3e49-416e-bb76-594879112f42 ("dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"
	W0312 00:25:26.781319 1183642 logs.go:138] Found kubelet problem: Mar 12 00:23:20 old-k8s-version-571339 kubelet[663]: E0312 00:23:20.160925     663 pod_workers.go:191] Error syncing pod 917984d8-3e49-416e-bb76-594879112f42 ("dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"
	W0312 00:25:26.781505 1183642 logs.go:138] Found kubelet problem: Mar 12 00:23:21 old-k8s-version-571339 kubelet[663]: E0312 00:23:21.135408     663 pod_workers.go:191] Error syncing pod f8f3ac0e-3c2e-451d-8e5a-d3937eab4142 ("metrics-server-9975d5f86-c87xf_kube-system(f8f3ac0e-3c2e-451d-8e5a-d3937eab4142)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0312 00:25:26.781832 1183642 logs.go:138] Found kubelet problem: Mar 12 00:23:32 old-k8s-version-571339 kubelet[663]: E0312 00:23:32.135536     663 pod_workers.go:191] Error syncing pod 917984d8-3e49-416e-bb76-594879112f42 ("dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"
	W0312 00:25:26.782019 1183642 logs.go:138] Found kubelet problem: Mar 12 00:23:35 old-k8s-version-571339 kubelet[663]: E0312 00:23:35.135123     663 pod_workers.go:191] Error syncing pod f8f3ac0e-3c2e-451d-8e5a-d3937eab4142 ("metrics-server-9975d5f86-c87xf_kube-system(f8f3ac0e-3c2e-451d-8e5a-d3937eab4142)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0312 00:25:26.782353 1183642 logs.go:138] Found kubelet problem: Mar 12 00:23:45 old-k8s-version-571339 kubelet[663]: E0312 00:23:45.134915     663 pod_workers.go:191] Error syncing pod 917984d8-3e49-416e-bb76-594879112f42 ("dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"
	W0312 00:25:26.782536 1183642 logs.go:138] Found kubelet problem: Mar 12 00:23:47 old-k8s-version-571339 kubelet[663]: E0312 00:23:47.135061     663 pod_workers.go:191] Error syncing pod f8f3ac0e-3c2e-451d-8e5a-d3937eab4142 ("metrics-server-9975d5f86-c87xf_kube-system(f8f3ac0e-3c2e-451d-8e5a-d3937eab4142)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0312 00:25:26.782860 1183642 logs.go:138] Found kubelet problem: Mar 12 00:23:59 old-k8s-version-571339 kubelet[663]: E0312 00:23:59.134729     663 pod_workers.go:191] Error syncing pod 917984d8-3e49-416e-bb76-594879112f42 ("dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"
	W0312 00:25:26.783047 1183642 logs.go:138] Found kubelet problem: Mar 12 00:24:02 old-k8s-version-571339 kubelet[663]: E0312 00:24:02.135523     663 pod_workers.go:191] Error syncing pod f8f3ac0e-3c2e-451d-8e5a-d3937eab4142 ("metrics-server-9975d5f86-c87xf_kube-system(f8f3ac0e-3c2e-451d-8e5a-d3937eab4142)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0312 00:25:26.783378 1183642 logs.go:138] Found kubelet problem: Mar 12 00:24:11 old-k8s-version-571339 kubelet[663]: E0312 00:24:11.134786     663 pod_workers.go:191] Error syncing pod 917984d8-3e49-416e-bb76-594879112f42 ("dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"
	W0312 00:25:26.783568 1183642 logs.go:138] Found kubelet problem: Mar 12 00:24:16 old-k8s-version-571339 kubelet[663]: E0312 00:24:16.135140     663 pod_workers.go:191] Error syncing pod f8f3ac0e-3c2e-451d-8e5a-d3937eab4142 ("metrics-server-9975d5f86-c87xf_kube-system(f8f3ac0e-3c2e-451d-8e5a-d3937eab4142)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0312 00:25:26.783898 1183642 logs.go:138] Found kubelet problem: Mar 12 00:24:22 old-k8s-version-571339 kubelet[663]: E0312 00:24:22.135430     663 pod_workers.go:191] Error syncing pod 917984d8-3e49-416e-bb76-594879112f42 ("dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"
	W0312 00:25:26.784081 1183642 logs.go:138] Found kubelet problem: Mar 12 00:24:31 old-k8s-version-571339 kubelet[663]: E0312 00:24:31.135197     663 pod_workers.go:191] Error syncing pod f8f3ac0e-3c2e-451d-8e5a-d3937eab4142 ("metrics-server-9975d5f86-c87xf_kube-system(f8f3ac0e-3c2e-451d-8e5a-d3937eab4142)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0312 00:25:26.784405 1183642 logs.go:138] Found kubelet problem: Mar 12 00:24:35 old-k8s-version-571339 kubelet[663]: E0312 00:24:35.134852     663 pod_workers.go:191] Error syncing pod 917984d8-3e49-416e-bb76-594879112f42 ("dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"
	W0312 00:25:26.784588 1183642 logs.go:138] Found kubelet problem: Mar 12 00:24:43 old-k8s-version-571339 kubelet[663]: E0312 00:24:43.135594     663 pod_workers.go:191] Error syncing pod f8f3ac0e-3c2e-451d-8e5a-d3937eab4142 ("metrics-server-9975d5f86-c87xf_kube-system(f8f3ac0e-3c2e-451d-8e5a-d3937eab4142)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0312 00:25:26.784918 1183642 logs.go:138] Found kubelet problem: Mar 12 00:24:47 old-k8s-version-571339 kubelet[663]: E0312 00:24:47.134779     663 pod_workers.go:191] Error syncing pod 917984d8-3e49-416e-bb76-594879112f42 ("dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"
	W0312 00:25:26.785104 1183642 logs.go:138] Found kubelet problem: Mar 12 00:24:56 old-k8s-version-571339 kubelet[663]: E0312 00:24:56.137730     663 pod_workers.go:191] Error syncing pod f8f3ac0e-3c2e-451d-8e5a-d3937eab4142 ("metrics-server-9975d5f86-c87xf_kube-system(f8f3ac0e-3c2e-451d-8e5a-d3937eab4142)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0312 00:25:26.785429 1183642 logs.go:138] Found kubelet problem: Mar 12 00:24:59 old-k8s-version-571339 kubelet[663]: E0312 00:24:59.134749     663 pod_workers.go:191] Error syncing pod 917984d8-3e49-416e-bb76-594879112f42 ("dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"
	W0312 00:25:26.785615 1183642 logs.go:138] Found kubelet problem: Mar 12 00:25:07 old-k8s-version-571339 kubelet[663]: E0312 00:25:07.135133     663 pod_workers.go:191] Error syncing pod f8f3ac0e-3c2e-451d-8e5a-d3937eab4142 ("metrics-server-9975d5f86-c87xf_kube-system(f8f3ac0e-3c2e-451d-8e5a-d3937eab4142)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0312 00:25:26.785969 1183642 logs.go:138] Found kubelet problem: Mar 12 00:25:10 old-k8s-version-571339 kubelet[663]: E0312 00:25:10.134865     663 pod_workers.go:191] Error syncing pod 917984d8-3e49-416e-bb76-594879112f42 ("dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"
	W0312 00:25:26.786154 1183642 logs.go:138] Found kubelet problem: Mar 12 00:25:19 old-k8s-version-571339 kubelet[663]: E0312 00:25:19.135515     663 pod_workers.go:191] Error syncing pod f8f3ac0e-3c2e-451d-8e5a-d3937eab4142 ("metrics-server-9975d5f86-c87xf_kube-system(f8f3ac0e-3c2e-451d-8e5a-d3937eab4142)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0312 00:25:26.786482 1183642 logs.go:138] Found kubelet problem: Mar 12 00:25:22 old-k8s-version-571339 kubelet[663]: E0312 00:25:22.136716     663 pod_workers.go:191] Error syncing pod 917984d8-3e49-416e-bb76-594879112f42 ("dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"
	I0312 00:25:26.786493 1183642 logs.go:123] Gathering logs for dmesg ...
	I0312 00:25:26.786508 1183642 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0312 00:25:26.806149 1183642 logs.go:123] Gathering logs for etcd [00fea42a626bc543839c933d2b36dd4155e2329531f2c8a74fa65079753377a9] ...
	I0312 00:25:26.806190 1183642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 00fea42a626bc543839c933d2b36dd4155e2329531f2c8a74fa65079753377a9"
	I0312 00:25:26.874399 1183642 logs.go:123] Gathering logs for kube-controller-manager [ea472223f1505d76ec6e5c18af4f3ab7760ebdefed097a213c78a396e15d7ba7] ...
	I0312 00:25:26.874432 1183642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ea472223f1505d76ec6e5c18af4f3ab7760ebdefed097a213c78a396e15d7ba7"
	I0312 00:25:26.948646 1183642 logs.go:123] Gathering logs for kube-controller-manager [ac83611721f7a3d26415ed5ae3625edece62f8bd00bc9a63ce61ffa2ad2c9fbc] ...
	I0312 00:25:26.948680 1183642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ac83611721f7a3d26415ed5ae3625edece62f8bd00bc9a63ce61ffa2ad2c9fbc"
	I0312 00:25:27.046898 1183642 out.go:304] Setting ErrFile to fd 2...
	I0312 00:25:27.046932 1183642 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0312 00:25:27.047003 1183642 out.go:239] X Problems detected in kubelet:
	W0312 00:25:27.047015 1183642 out.go:239]   Mar 12 00:24:59 old-k8s-version-571339 kubelet[663]: E0312 00:24:59.134749     663 pod_workers.go:191] Error syncing pod 917984d8-3e49-416e-bb76-594879112f42 ("dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"
	W0312 00:25:27.047022 1183642 out.go:239]   Mar 12 00:25:07 old-k8s-version-571339 kubelet[663]: E0312 00:25:07.135133     663 pod_workers.go:191] Error syncing pod f8f3ac0e-3c2e-451d-8e5a-d3937eab4142 ("metrics-server-9975d5f86-c87xf_kube-system(f8f3ac0e-3c2e-451d-8e5a-d3937eab4142)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0312 00:25:27.047037 1183642 out.go:239]   Mar 12 00:25:10 old-k8s-version-571339 kubelet[663]: E0312 00:25:10.134865     663 pod_workers.go:191] Error syncing pod 917984d8-3e49-416e-bb76-594879112f42 ("dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"
	W0312 00:25:27.047048 1183642 out.go:239]   Mar 12 00:25:19 old-k8s-version-571339 kubelet[663]: E0312 00:25:19.135515     663 pod_workers.go:191] Error syncing pod f8f3ac0e-3c2e-451d-8e5a-d3937eab4142 ("metrics-server-9975d5f86-c87xf_kube-system(f8f3ac0e-3c2e-451d-8e5a-d3937eab4142)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0312 00:25:27.047056 1183642 out.go:239]   Mar 12 00:25:22 old-k8s-version-571339 kubelet[663]: E0312 00:25:22.136716     663 pod_workers.go:191] Error syncing pod 917984d8-3e49-416e-bb76-594879112f42 ("dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"
	I0312 00:25:27.047062 1183642 out.go:304] Setting ErrFile to fd 2...
	I0312 00:25:27.047068 1183642 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0312 00:25:37.048428 1183642 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0312 00:25:37.061991 1183642 api_server.go:72] duration metric: took 6m0.915958249s to wait for apiserver process to appear ...
	I0312 00:25:37.062023 1183642 api_server.go:88] waiting for apiserver healthz status ...
	I0312 00:25:37.064575 1183642 out.go:177] 
	W0312 00:25:37.066356 1183642 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: cluster wait timed out during healthz check
	W0312 00:25:37.066381 1183642 out.go:239] * 
	W0312 00:25:37.067988 1183642 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0312 00:25:37.069878 1183642 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p old-k8s-version-571339 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-571339
helpers_test.go:235: (dbg) docker inspect old-k8s-version-571339:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d0ed924bcf70c0c265e040ae66ad87355f5e6410c4af11b74451f1ecd317edad",
	        "Created": "2024-03-12T00:16:37.69329513Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1183859,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-03-12T00:19:28.61880223Z",
	            "FinishedAt": "2024-03-12T00:19:27.247079461Z"
	        },
	        "Image": "sha256:4a9b65157dd7fb2ddb7cb7afe975b3dc288e9877c60d13613a69dd41a70e2e4e",
	        "ResolvConfPath": "/var/lib/docker/containers/d0ed924bcf70c0c265e040ae66ad87355f5e6410c4af11b74451f1ecd317edad/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d0ed924bcf70c0c265e040ae66ad87355f5e6410c4af11b74451f1ecd317edad/hostname",
	        "HostsPath": "/var/lib/docker/containers/d0ed924bcf70c0c265e040ae66ad87355f5e6410c4af11b74451f1ecd317edad/hosts",
	        "LogPath": "/var/lib/docker/containers/d0ed924bcf70c0c265e040ae66ad87355f5e6410c4af11b74451f1ecd317edad/d0ed924bcf70c0c265e040ae66ad87355f5e6410c4af11b74451f1ecd317edad-json.log",
	        "Name": "/old-k8s-version-571339",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "old-k8s-version-571339:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-571339",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/799efc588e8bca8b36d282e78e117c4163fcff8287fd5d3f16770fdde78849b3-init/diff:/var/lib/docker/overlay2/af090fb944a3b68787e040c2e3137e8bdfd21b050bcd01e191acaa1449d77a1d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/799efc588e8bca8b36d282e78e117c4163fcff8287fd5d3f16770fdde78849b3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/799efc588e8bca8b36d282e78e117c4163fcff8287fd5d3f16770fdde78849b3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/799efc588e8bca8b36d282e78e117c4163fcff8287fd5d3f16770fdde78849b3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-571339",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-571339/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-571339",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-571339",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-571339",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2efb7bf6160d6e6bd33829a98d866eb4709fb58faffbc748362ac583077de47c",
	            "SandboxKey": "/var/run/docker/netns/2efb7bf6160d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34197"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34196"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34193"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34195"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34194"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-571339": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "d0ed924bcf70",
	                        "old-k8s-version-571339"
	                    ],
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "NetworkID": "7dab5417b71482a2fa0e3f51504a41f5c656a79e603a13811e111600dad1650c",
	                    "EndpointID": "6672f67923e99fa5d955745822f6951448e20a20383c19c0dc97f889ec0f59cb",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "old-k8s-version-571339",
	                        "d0ed924bcf70"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-571339 -n old-k8s-version-571339
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-571339 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-571339 logs -n 25: (2.28202625s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |         Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-399839 sudo                                  | cilium-399839            | jenkins | v1.32.0 | 12 Mar 24 00:15 UTC |                     |
	|         | containerd config dump                                 |                          |         |         |                     |                     |
	| ssh     | -p cilium-399839 sudo                                  | cilium-399839            | jenkins | v1.32.0 | 12 Mar 24 00:15 UTC |                     |
	|         | systemctl status crio --all                            |                          |         |         |                     |                     |
	|         | --full --no-pager                                      |                          |         |         |                     |                     |
	| ssh     | -p cilium-399839 sudo                                  | cilium-399839            | jenkins | v1.32.0 | 12 Mar 24 00:15 UTC |                     |
	|         | systemctl cat crio --no-pager                          |                          |         |         |                     |                     |
	| ssh     | -p cilium-399839 sudo find                             | cilium-399839            | jenkins | v1.32.0 | 12 Mar 24 00:15 UTC |                     |
	|         | /etc/crio -type f -exec sh -c                          |                          |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                          |         |         |                     |                     |
	| ssh     | -p cilium-399839 sudo crio                             | cilium-399839            | jenkins | v1.32.0 | 12 Mar 24 00:15 UTC |                     |
	|         | config                                                 |                          |         |         |                     |                     |
	| delete  | -p cilium-399839                                       | cilium-399839            | jenkins | v1.32.0 | 12 Mar 24 00:15 UTC | 12 Mar 24 00:15 UTC |
	| start   | -p cert-expiration-627308                              | cert-expiration-627308   | jenkins | v1.32.0 | 12 Mar 24 00:15 UTC | 12 Mar 24 00:16 UTC |
	|         | --memory=2048                                          |                          |         |         |                     |                     |
	|         | --cert-expiration=3m                                   |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	| ssh     | force-systemd-env-074205                               | force-systemd-env-074205 | jenkins | v1.32.0 | 12 Mar 24 00:15 UTC | 12 Mar 24 00:15 UTC |
	|         | ssh cat                                                |                          |         |         |                     |                     |
	|         | /etc/containerd/config.toml                            |                          |         |         |                     |                     |
	| delete  | -p force-systemd-env-074205                            | force-systemd-env-074205 | jenkins | v1.32.0 | 12 Mar 24 00:15 UTC | 12 Mar 24 00:15 UTC |
	| start   | -p cert-options-844120                                 | cert-options-844120      | jenkins | v1.32.0 | 12 Mar 24 00:15 UTC | 12 Mar 24 00:16 UTC |
	|         | --memory=2048                                          |                          |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                              |                          |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15                          |                          |         |         |                     |                     |
	|         | --apiserver-names=localhost                            |                          |         |         |                     |                     |
	|         | --apiserver-names=www.google.com                       |                          |         |         |                     |                     |
	|         | --apiserver-port=8555                                  |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	| ssh     | cert-options-844120 ssh                                | cert-options-844120      | jenkins | v1.32.0 | 12 Mar 24 00:16 UTC | 12 Mar 24 00:16 UTC |
	|         | openssl x509 -text -noout -in                          |                          |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                          |         |         |                     |                     |
	| ssh     | -p cert-options-844120 -- sudo                         | cert-options-844120      | jenkins | v1.32.0 | 12 Mar 24 00:16 UTC | 12 Mar 24 00:16 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                          |         |         |                     |                     |
	| delete  | -p cert-options-844120                                 | cert-options-844120      | jenkins | v1.32.0 | 12 Mar 24 00:16 UTC | 12 Mar 24 00:16 UTC |
	| start   | -p old-k8s-version-571339                              | old-k8s-version-571339   | jenkins | v1.32.0 | 12 Mar 24 00:16 UTC | 12 Mar 24 00:19 UTC |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                          |         |         |                     |                     |
	|         | --kvm-network=default                                  |                          |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                          |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                          |         |         |                     |                     |
	|         | --keep-context=false                                   |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                          |         |         |                     |                     |
	| start   | -p cert-expiration-627308                              | cert-expiration-627308   | jenkins | v1.32.0 | 12 Mar 24 00:19 UTC | 12 Mar 24 00:19 UTC |
	|         | --memory=2048                                          |                          |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	| delete  | -p cert-expiration-627308                              | cert-expiration-627308   | jenkins | v1.32.0 | 12 Mar 24 00:19 UTC | 12 Mar 24 00:19 UTC |
	| start   | -p no-preload-820117                                   | no-preload-820117        | jenkins | v1.32.0 | 12 Mar 24 00:19 UTC | 12 Mar 24 00:20 UTC |
	|         | --memory=2200 --alsologtostderr                        |                          |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                          |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-571339        | old-k8s-version-571339   | jenkins | v1.32.0 | 12 Mar 24 00:19 UTC | 12 Mar 24 00:19 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                          |         |         |                     |                     |
	| stop    | -p old-k8s-version-571339                              | old-k8s-version-571339   | jenkins | v1.32.0 | 12 Mar 24 00:19 UTC | 12 Mar 24 00:19 UTC |
	|         | --alsologtostderr -v=3                                 |                          |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-571339             | old-k8s-version-571339   | jenkins | v1.32.0 | 12 Mar 24 00:19 UTC | 12 Mar 24 00:19 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                          |         |         |                     |                     |
	| start   | -p old-k8s-version-571339                              | old-k8s-version-571339   | jenkins | v1.32.0 | 12 Mar 24 00:19 UTC |                     |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                          |         |         |                     |                     |
	|         | --kvm-network=default                                  |                          |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                          |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                          |         |         |                     |                     |
	|         | --keep-context=false                                   |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                          |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-820117             | no-preload-820117        | jenkins | v1.32.0 | 12 Mar 24 00:20 UTC | 12 Mar 24 00:20 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                          |         |         |                     |                     |
	| stop    | -p no-preload-820117                                   | no-preload-820117        | jenkins | v1.32.0 | 12 Mar 24 00:20 UTC | 12 Mar 24 00:20 UTC |
	|         | --alsologtostderr -v=3                                 |                          |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-820117                  | no-preload-820117        | jenkins | v1.32.0 | 12 Mar 24 00:20 UTC | 12 Mar 24 00:20 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                          |         |         |                     |                     |
	| start   | -p no-preload-820117                                   | no-preload-820117        | jenkins | v1.32.0 | 12 Mar 24 00:20 UTC |                     |
	|         | --memory=2200 --alsologtostderr                        |                          |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                          |         |         |                     |                     |
	|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/12 00:20:50
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.22.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0312 00:20:50.541954 1188932 out.go:291] Setting OutFile to fd 1 ...
	I0312 00:20:50.542135 1188932 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0312 00:20:50.542219 1188932 out.go:304] Setting ErrFile to fd 2...
	I0312 00:20:50.542235 1188932 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0312 00:20:50.542599 1188932 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18358-982285/.minikube/bin
	I0312 00:20:50.543046 1188932 out.go:298] Setting JSON to false
	I0312 00:20:50.544420 1188932 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":18199,"bootTime":1710184652,"procs":224,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0312 00:20:50.544504 1188932 start.go:139] virtualization:  
	I0312 00:20:50.547384 1188932 out.go:177] * [no-preload-820117] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0312 00:20:50.549904 1188932 out.go:177]   - MINIKUBE_LOCATION=18358
	I0312 00:20:50.552171 1188932 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0312 00:20:50.550060 1188932 notify.go:220] Checking for updates...
	I0312 00:20:50.555668 1188932 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18358-982285/kubeconfig
	I0312 00:20:50.557452 1188932 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18358-982285/.minikube
	I0312 00:20:50.559152 1188932 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0312 00:20:50.560775 1188932 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0312 00:20:50.562988 1188932 config.go:182] Loaded profile config "no-preload-820117": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.0-rc.2
	I0312 00:20:50.563569 1188932 driver.go:392] Setting default libvirt URI to qemu:///system
	I0312 00:20:50.586673 1188932 docker.go:122] docker version: linux-25.0.4:Docker Engine - Community
	I0312 00:20:50.586789 1188932 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0312 00:20:50.658543 1188932 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:38 OomKillDisable:true NGoroutines:56 SystemTime:2024-03-12 00:20:50.64889652 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0312 00:20:50.658663 1188932 docker.go:295] overlay module found
	I0312 00:20:50.660696 1188932 out.go:177] * Using the docker driver based on existing profile
	I0312 00:20:50.662660 1188932 start.go:297] selected driver: docker
	I0312 00:20:50.662678 1188932 start.go:901] validating driver "docker" against &{Name:no-preload-820117 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-820117 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0312 00:20:50.662790 1188932 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0312 00:20:50.663699 1188932 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0312 00:20:50.721141 1188932 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:38 OomKillDisable:true NGoroutines:56 SystemTime:2024-03-12 00:20:50.710655071 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0312 00:20:50.721491 1188932 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0312 00:20:50.721553 1188932 cni.go:84] Creating CNI manager for ""
	I0312 00:20:50.721568 1188932 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0312 00:20:50.721613 1188932 start.go:340] cluster config:
	{Name:no-preload-820117 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-820117 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0312 00:20:50.724785 1188932 out.go:177] * Starting "no-preload-820117" primary control-plane node in "no-preload-820117" cluster
	I0312 00:20:50.726676 1188932 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0312 00:20:50.728514 1188932 out.go:177] * Pulling base image v0.0.42-1708944392-18244 ...
	I0312 00:20:50.730603 1188932 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime containerd
	I0312 00:20:50.730690 1188932 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local docker daemon
	I0312 00:20:50.730803 1188932 profile.go:142] Saving config to /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/no-preload-820117/config.json ...
	I0312 00:20:50.731196 1188932 cache.go:107] acquiring lock: {Name:mkf92ea0c89d4c9524d1be4c946c4329ff83f832 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0312 00:20:50.731302 1188932 cache.go:115] /home/jenkins/minikube-integration/18358-982285/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0312 00:20:50.731418 1188932 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/18358-982285/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 226.899µs
	I0312 00:20:50.731431 1188932 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/18358-982285/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0312 00:20:50.731446 1188932 cache.go:107] acquiring lock: {Name:mk4f885f5af3bd070a6064f366370a31e2783607 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0312 00:20:50.731496 1188932 cache.go:115] /home/jenkins/minikube-integration/18358-982285/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 exists
	I0312 00:20:50.731506 1188932 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" -> "/home/jenkins/minikube-integration/18358-982285/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2" took 63.137µs
	I0312 00:20:50.731513 1188932 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.29.0-rc.2 -> /home/jenkins/minikube-integration/18358-982285/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 succeeded
	I0312 00:20:50.731528 1188932 cache.go:107] acquiring lock: {Name:mk605a8ec10e82852869543fe9055ed85de43697 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0312 00:20:50.731668 1188932 cache.go:115] /home/jenkins/minikube-integration/18358-982285/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 exists
	I0312 00:20:50.731684 1188932 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" -> "/home/jenkins/minikube-integration/18358-982285/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2" took 156.698µs
	I0312 00:20:50.731744 1188932 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 -> /home/jenkins/minikube-integration/18358-982285/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 succeeded
	I0312 00:20:50.731768 1188932 cache.go:107] acquiring lock: {Name:mk04bd77379cac93c500d24284ab41dbdff0ba22 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0312 00:20:50.731821 1188932 cache.go:115] /home/jenkins/minikube-integration/18358-982285/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 exists
	I0312 00:20:50.731816 1188932 cache.go:107] acquiring lock: {Name:mk1efd3e8a093653b7f7789b238a2bb8793c57c9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0312 00:20:50.731828 1188932 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" -> "/home/jenkins/minikube-integration/18358-982285/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2" took 63.285µs
	I0312 00:20:50.731849 1188932 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.29.0-rc.2 -> /home/jenkins/minikube-integration/18358-982285/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 succeeded
	I0312 00:20:50.731860 1188932 cache.go:107] acquiring lock: {Name:mk6161219273dc520579db8d426cf678fbbc9882 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0312 00:20:50.731885 1188932 cache.go:115] /home/jenkins/minikube-integration/18358-982285/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0312 00:20:50.731895 1188932 cache.go:115] /home/jenkins/minikube-integration/18358-982285/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.10-0 exists
	I0312 00:20:50.731907 1188932 cache.go:96] cache image "registry.k8s.io/etcd:3.5.10-0" -> "/home/jenkins/minikube-integration/18358-982285/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.10-0" took 48.606µs
	I0312 00:20:50.731915 1188932 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.10-0 -> /home/jenkins/minikube-integration/18358-982285/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.10-0 succeeded
	I0312 00:20:50.731908 1188932 cache.go:107] acquiring lock: {Name:mk0f67a5c72f21c67d3c044b68201bb27f19ccd0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0312 00:20:50.731894 1188932 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/home/jenkins/minikube-integration/18358-982285/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 87.333µs
	I0312 00:20:50.731932 1188932 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /home/jenkins/minikube-integration/18358-982285/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0312 00:20:50.731943 1188932 cache.go:115] /home/jenkins/minikube-integration/18358-982285/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 exists
	I0312 00:20:50.731949 1188932 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.29.0-rc.2" -> "/home/jenkins/minikube-integration/18358-982285/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.29.0-rc.2" took 42.805µs
	I0312 00:20:50.731947 1188932 cache.go:107] acquiring lock: {Name:mk93316922e2d696693cfb2ed02a67004963bb8a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0312 00:20:50.731956 1188932 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.29.0-rc.2 -> /home/jenkins/minikube-integration/18358-982285/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 succeeded
	I0312 00:20:50.732104 1188932 cache.go:115] /home/jenkins/minikube-integration/18358-982285/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0312 00:20:50.732120 1188932 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/home/jenkins/minikube-integration/18358-982285/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 174.436µs
	I0312 00:20:50.732127 1188932 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /home/jenkins/minikube-integration/18358-982285/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0312 00:20:50.732155 1188932 cache.go:87] Successfully saved all images to host disk.
	I0312 00:20:50.747997 1188932 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local docker daemon, skipping pull
	I0312 00:20:50.748025 1188932 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 exists in daemon, skipping load
	I0312 00:20:50.748047 1188932 cache.go:194] Successfully downloaded all kic artifacts
	I0312 00:20:50.748075 1188932 start.go:360] acquireMachinesLock for no-preload-820117: {Name:mkf37752eca2fd3272417c72bab17c9ea390482e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0312 00:20:50.748139 1188932 start.go:364] duration metric: took 41.722µs to acquireMachinesLock for "no-preload-820117"
	I0312 00:20:50.748164 1188932 start.go:96] Skipping create...Using existing machine configuration
	I0312 00:20:50.748170 1188932 fix.go:54] fixHost starting: 
	I0312 00:20:50.748474 1188932 cli_runner.go:164] Run: docker container inspect no-preload-820117 --format={{.State.Status}}
	I0312 00:20:50.765598 1188932 fix.go:112] recreateIfNeeded on no-preload-820117: state=Stopped err=<nil>
	W0312 00:20:50.765630 1188932 fix.go:138] unexpected machine state, will restart: <nil>
	I0312 00:20:50.769145 1188932 out.go:177] * Restarting existing docker container for "no-preload-820117" ...
	I0312 00:20:49.718150 1183642 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-571339" in "kube-system" namespace has status "Ready":"False"
	I0312 00:20:52.223566 1183642 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-571339" in "kube-system" namespace has status "Ready":"False"
	I0312 00:20:50.771173 1188932 cli_runner.go:164] Run: docker start no-preload-820117
	I0312 00:20:51.156543 1188932 cli_runner.go:164] Run: docker container inspect no-preload-820117 --format={{.State.Status}}
	I0312 00:20:51.180309 1188932 kic.go:430] container "no-preload-820117" state is running.
	I0312 00:20:51.180709 1188932 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-820117
	I0312 00:20:51.202838 1188932 profile.go:142] Saving config to /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/no-preload-820117/config.json ...
	I0312 00:20:51.203062 1188932 machine.go:94] provisionDockerMachine start ...
	I0312 00:20:51.203118 1188932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-820117
	I0312 00:20:51.232865 1188932 main.go:141] libmachine: Using SSH client type: native
	I0312 00:20:51.233141 1188932 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1ca0] 0x3e4500 <nil>  [] 0s} 127.0.0.1 34202 <nil> <nil>}
	I0312 00:20:51.233151 1188932 main.go:141] libmachine: About to run SSH command:
	hostname
	I0312 00:20:51.233908 1188932 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0312 00:20:54.367519 1188932 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-820117
	
	I0312 00:20:54.367541 1188932 ubuntu.go:169] provisioning hostname "no-preload-820117"
	I0312 00:20:54.367608 1188932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-820117
	I0312 00:20:54.383745 1188932 main.go:141] libmachine: Using SSH client type: native
	I0312 00:20:54.384002 1188932 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1ca0] 0x3e4500 <nil>  [] 0s} 127.0.0.1 34202 <nil> <nil>}
	I0312 00:20:54.384020 1188932 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-820117 && echo "no-preload-820117" | sudo tee /etc/hostname
	I0312 00:20:54.527831 1188932 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-820117
	
	I0312 00:20:54.527945 1188932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-820117
	I0312 00:20:54.545529 1188932 main.go:141] libmachine: Using SSH client type: native
	I0312 00:20:54.545788 1188932 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1ca0] 0x3e4500 <nil>  [] 0s} 127.0.0.1 34202 <nil> <nil>}
	I0312 00:20:54.545810 1188932 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-820117' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-820117/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-820117' | sudo tee -a /etc/hosts; 
				fi
			fi
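The heredoc above is minikube's idempotent hostname fix-up for /etc/hosts: the 127.0.1.1 entry is only rewritten when the new hostname is not already present. A standalone sketch of the same logic (the hostname value is the one from this run; any name works the same way):
	HOST=no-preload-820117
	if ! grep -q "[[:space:]]${HOST}\$" /etc/hosts; then
		if grep -q '^127\.0\.1\.1[[:space:]]' /etc/hosts; then
			# rewrite the existing 127.0.1.1 line in place
			sudo sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 ${HOST}/" /etc/hosts
		else
			# no 127.0.1.1 entry yet; append one
			echo "127.0.1.1 ${HOST}" | sudo tee -a /etc/hosts
		fi
	fi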
	I0312 00:20:54.675514 1188932 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0312 00:20:54.675540 1188932 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18358-982285/.minikube CaCertPath:/home/jenkins/minikube-integration/18358-982285/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18358-982285/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18358-982285/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18358-982285/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18358-982285/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18358-982285/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18358-982285/.minikube}
	I0312 00:20:54.675561 1188932 ubuntu.go:177] setting up certificates
	I0312 00:20:54.675572 1188932 provision.go:84] configureAuth start
	I0312 00:20:54.675652 1188932 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-820117
	I0312 00:20:54.694260 1188932 provision.go:143] copyHostCerts
	I0312 00:20:54.694339 1188932 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-982285/.minikube/key.pem, removing ...
	I0312 00:20:54.694356 1188932 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-982285/.minikube/key.pem
	I0312 00:20:54.694434 1188932 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-982285/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18358-982285/.minikube/key.pem (1679 bytes)
	I0312 00:20:54.694562 1188932 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-982285/.minikube/ca.pem, removing ...
	I0312 00:20:54.694574 1188932 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-982285/.minikube/ca.pem
	I0312 00:20:54.694601 1188932 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-982285/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18358-982285/.minikube/ca.pem (1082 bytes)
	I0312 00:20:54.694671 1188932 exec_runner.go:144] found /home/jenkins/minikube-integration/18358-982285/.minikube/cert.pem, removing ...
	I0312 00:20:54.694688 1188932 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18358-982285/.minikube/cert.pem
	I0312 00:20:54.694715 1188932 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18358-982285/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18358-982285/.minikube/cert.pem (1123 bytes)
	I0312 00:20:54.694781 1188932 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18358-982285/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18358-982285/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18358-982285/.minikube/certs/ca-key.pem org=jenkins.no-preload-820117 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-820117]
	I0312 00:20:55.033567 1188932 provision.go:177] copyRemoteCerts
	I0312 00:20:55.033679 1188932 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0312 00:20:55.033743 1188932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-820117
	I0312 00:20:55.054507 1188932 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34202 SSHKeyPath:/home/jenkins/minikube-integration/18358-982285/.minikube/machines/no-preload-820117/id_rsa Username:docker}
	I0312 00:20:55.153333 1188932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-982285/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0312 00:20:55.180376 1188932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-982285/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0312 00:20:55.208912 1188932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-982285/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0312 00:20:55.235445 1188932 provision.go:87] duration metric: took 559.859948ms to configureAuth
	I0312 00:20:55.235471 1188932 ubuntu.go:193] setting minikube options for container-runtime
	I0312 00:20:55.235703 1188932 config.go:182] Loaded profile config "no-preload-820117": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.0-rc.2
	I0312 00:20:55.235718 1188932 machine.go:97] duration metric: took 4.032647509s to provisionDockerMachine
	I0312 00:20:55.235726 1188932 start.go:293] postStartSetup for "no-preload-820117" (driver="docker")
	I0312 00:20:55.235739 1188932 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0312 00:20:55.235795 1188932 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0312 00:20:55.235849 1188932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-820117
	I0312 00:20:55.252078 1188932 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34202 SSHKeyPath:/home/jenkins/minikube-integration/18358-982285/.minikube/machines/no-preload-820117/id_rsa Username:docker}
	I0312 00:20:55.344473 1188932 ssh_runner.go:195] Run: cat /etc/os-release
	I0312 00:20:55.347582 1188932 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0312 00:20:55.347658 1188932 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0312 00:20:55.347684 1188932 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0312 00:20:55.347706 1188932 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0312 00:20:55.347729 1188932 filesync.go:126] Scanning /home/jenkins/minikube-integration/18358-982285/.minikube/addons for local assets ...
	I0312 00:20:55.347801 1188932 filesync.go:126] Scanning /home/jenkins/minikube-integration/18358-982285/.minikube/files for local assets ...
	I0312 00:20:55.347935 1188932 filesync.go:149] local asset: /home/jenkins/minikube-integration/18358-982285/.minikube/files/etc/ssl/certs/9876862.pem -> 9876862.pem in /etc/ssl/certs
	I0312 00:20:55.348062 1188932 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0312 00:20:55.356594 1188932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-982285/.minikube/files/etc/ssl/certs/9876862.pem --> /etc/ssl/certs/9876862.pem (1708 bytes)
	I0312 00:20:55.381509 1188932 start.go:296] duration metric: took 145.766296ms for postStartSetup
	I0312 00:20:55.381633 1188932 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0312 00:20:55.381679 1188932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-820117
	I0312 00:20:55.400821 1188932 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34202 SSHKeyPath:/home/jenkins/minikube-integration/18358-982285/.minikube/machines/no-preload-820117/id_rsa Username:docker}
	I0312 00:20:55.492164 1188932 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0312 00:20:55.496564 1188932 fix.go:56] duration metric: took 4.74838679s for fixHost
	I0312 00:20:55.496588 1188932 start.go:83] releasing machines lock for "no-preload-820117", held for 4.7484367s
	I0312 00:20:55.496657 1188932 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-820117
	I0312 00:20:55.512626 1188932 ssh_runner.go:195] Run: cat /version.json
	I0312 00:20:55.512679 1188932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-820117
	I0312 00:20:55.512913 1188932 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0312 00:20:55.512957 1188932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-820117
	I0312 00:20:55.532428 1188932 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34202 SSHKeyPath:/home/jenkins/minikube-integration/18358-982285/.minikube/machines/no-preload-820117/id_rsa Username:docker}
	I0312 00:20:55.543131 1188932 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34202 SSHKeyPath:/home/jenkins/minikube-integration/18358-982285/.minikube/machines/no-preload-820117/id_rsa Username:docker}
	I0312 00:20:55.626680 1188932 ssh_runner.go:195] Run: systemctl --version
	I0312 00:20:55.759795 1188932 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0312 00:20:55.764145 1188932 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0312 00:20:55.782847 1188932 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0312 00:20:55.782921 1188932 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0312 00:20:55.791772 1188932 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
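The two find invocations above prepare CNI for kindnet: the first patches any loopback config (injecting a "name" field if missing and pinning cniVersion to 1.0.0), the second would park bridge/podman configs under a .mk_disabled suffix; here none existed. A quick way to inspect the outcome on the node:
	# configs the runtime will load, plus any parked with .mk_disabled
	ls -la /etc/cni/net.d/
	sudo cat /etc/cni/net.d/*loopback.conf* 2>/dev/null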
	I0312 00:20:55.791796 1188932 start.go:494] detecting cgroup driver to use...
	I0312 00:20:55.791830 1188932 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0312 00:20:55.791880 1188932 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0312 00:20:55.805586 1188932 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0312 00:20:55.817874 1188932 docker.go:217] disabling cri-docker service (if available) ...
	I0312 00:20:55.817946 1188932 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0312 00:20:55.834402 1188932 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0312 00:20:55.846680 1188932 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0312 00:20:55.934928 1188932 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0312 00:20:56.026248 1188932 docker.go:233] disabling docker service ...
	I0312 00:20:56.026376 1188932 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0312 00:20:56.044738 1188932 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0312 00:20:56.057287 1188932 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0312 00:20:56.150693 1188932 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0312 00:20:56.266398 1188932 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0312 00:20:56.281870 1188932 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0312 00:20:56.299043 1188932 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0312 00:20:56.310000 1188932 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0312 00:20:56.320172 1188932 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0312 00:20:56.320298 1188932 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0312 00:20:56.330897 1188932 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0312 00:20:56.341273 1188932 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0312 00:20:56.352132 1188932 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0312 00:20:56.362649 1188932 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0312 00:20:56.372445 1188932 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0312 00:20:56.384235 1188932 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0312 00:20:56.394172 1188932 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0312 00:20:56.403535 1188932 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0312 00:20:56.509745 1188932 ssh_runner.go:195] Run: sudo systemctl restart containerd
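The sed series above rewrites /etc/containerd/config.toml in place: the sandbox image is pinned to pause:3.9, SystemdCgroup is forced to false to match the cgroupfs driver detected on the host, legacy runc v1 runtimes are migrated to io.containerd.runc.v2, and conf_dir is pointed at /etc/cni/net.d. After the restart, the result can be spot-checked with:
	grep -n 'SystemdCgroup' /etc/containerd/config.toml   # expect: SystemdCgroup = false
	grep -n 'sandbox_image' /etc/containerd/config.toml   # expect: registry.k8s.io/pause:3.9
	systemctl is-active containerd                        # expect: active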
	I0312 00:20:56.710905 1188932 start.go:541] Will wait 60s for socket path /run/containerd/containerd.sock
	I0312 00:20:56.710977 1188932 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0312 00:20:56.716899 1188932 start.go:562] Will wait 60s for crictl version
	I0312 00:20:56.716962 1188932 ssh_runner.go:195] Run: which crictl
	I0312 00:20:56.721946 1188932 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0312 00:20:56.761680 1188932 start.go:578] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.28
	RuntimeApiVersion:  v1
	I0312 00:20:56.761754 1188932 ssh_runner.go:195] Run: containerd --version
	I0312 00:20:56.787297 1188932 ssh_runner.go:195] Run: containerd --version
	I0312 00:20:56.811807 1188932 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on containerd 1.6.28 ...
	I0312 00:20:56.814611 1188932 cli_runner.go:164] Run: docker network inspect no-preload-820117 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
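The --format argument above is a Go template that assembles a small JSON document from the network's IPAM config and container list. Individual fields can be pulled the same way, e.g. just the subnet and gateway of this cluster network:
	docker network inspect no-preload-820117 \
	  --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'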
	I0312 00:20:56.832211 1188932 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0312 00:20:56.836483 1188932 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0312 00:20:56.847610 1188932 kubeadm.go:877] updating cluster {Name:no-preload-820117 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-820117 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0312 00:20:56.847735 1188932 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime containerd
	I0312 00:20:56.847788 1188932 ssh_runner.go:195] Run: sudo crictl images --output json
	I0312 00:20:56.885727 1188932 containerd.go:612] all images are preloaded for containerd runtime.
	I0312 00:20:56.885749 1188932 cache_images.go:84] Images are preloaded, skipping loading
	I0312 00:20:56.885757 1188932 kubeadm.go:928] updating node { 192.168.85.2 8443 v1.29.0-rc.2 containerd true true} ...
	I0312 00:20:56.885861 1188932 kubeadm.go:940] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-820117 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-820117 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
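The empty ExecStart= followed by a full ExecStart=... in the unit text above is the standard systemd drop-in idiom: the blank assignment clears the command list inherited from the base kubelet.service, which systemd requires before a non-oneshot unit may set a new one (the drop-in is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below). Handy commands when inspecting such overrides:
	systemctl cat kubelet     # base unit plus all drop-ins, merged in order
	sudo systemctl daemon-reload && sudo systemctl restart kubelet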
	I0312 00:20:56.885930 1188932 ssh_runner.go:195] Run: sudo crictl info
	I0312 00:20:56.922106 1188932 cni.go:84] Creating CNI manager for ""
	I0312 00:20:56.922134 1188932 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0312 00:20:56.922145 1188932 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0312 00:20:56.922167 1188932 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-820117 NodeName:no-preload-820117 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0312 00:20:56.922298 1188932 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-820117"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
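The generated kubeadm config above stacks four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by ---; it lands on the node as /var/tmp/minikube/kubeadm.yaml.new via the scp below. On kubeadm v1.26 and newer, such a stacked file can be sanity-checked offline, e.g.:
	sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new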
	
	I0312 00:20:56.922368 1188932 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0312 00:20:56.932378 1188932 binaries.go:44] Found k8s binaries, skipping transfer
	I0312 00:20:56.932452 1188932 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0312 00:20:56.941363 1188932 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I0312 00:20:56.960725 1188932 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0312 00:20:56.980006 1188932 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2176 bytes)
	I0312 00:20:56.999645 1188932 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0312 00:20:57.005884 1188932 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0312 00:20:57.017793 1188932 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0312 00:20:57.106623 1188932 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0312 00:20:57.123629 1188932 certs.go:68] Setting up /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/no-preload-820117 for IP: 192.168.85.2
	I0312 00:20:57.123655 1188932 certs.go:194] generating shared ca certs ...
	I0312 00:20:57.123671 1188932 certs.go:226] acquiring lock for ca certs: {Name:mk0a8924146da92e76e9ff4162540f84539e9725 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0312 00:20:57.123895 1188932 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18358-982285/.minikube/ca.key
	I0312 00:20:57.123968 1188932 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18358-982285/.minikube/proxy-client-ca.key
	I0312 00:20:57.123984 1188932 certs.go:256] generating profile certs ...
	I0312 00:20:57.124104 1188932 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/no-preload-820117/client.key
	I0312 00:20:57.124254 1188932 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/no-preload-820117/apiserver.key.268b8182
	I0312 00:20:57.124330 1188932 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/no-preload-820117/proxy-client.key
	I0312 00:20:57.124488 1188932 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-982285/.minikube/certs/987686.pem (1338 bytes)
	W0312 00:20:57.124544 1188932 certs.go:480] ignoring /home/jenkins/minikube-integration/18358-982285/.minikube/certs/987686_empty.pem, impossibly tiny 0 bytes
	I0312 00:20:57.124558 1188932 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-982285/.minikube/certs/ca-key.pem (1675 bytes)
	I0312 00:20:57.124588 1188932 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-982285/.minikube/certs/ca.pem (1082 bytes)
	I0312 00:20:57.124648 1188932 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-982285/.minikube/certs/cert.pem (1123 bytes)
	I0312 00:20:57.124692 1188932 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-982285/.minikube/certs/key.pem (1679 bytes)
	I0312 00:20:57.124771 1188932 certs.go:484] found cert: /home/jenkins/minikube-integration/18358-982285/.minikube/files/etc/ssl/certs/9876862.pem (1708 bytes)
	I0312 00:20:57.125559 1188932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-982285/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0312 00:20:57.156483 1188932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-982285/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0312 00:20:57.184039 1188932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-982285/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0312 00:20:57.211140 1188932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-982285/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0312 00:20:57.238842 1188932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/no-preload-820117/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0312 00:20:57.273175 1188932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/no-preload-820117/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0312 00:20:57.314602 1188932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/no-preload-820117/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0312 00:20:57.349458 1188932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/no-preload-820117/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0312 00:20:57.379205 1188932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-982285/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0312 00:20:57.414532 1188932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-982285/.minikube/certs/987686.pem --> /usr/share/ca-certificates/987686.pem (1338 bytes)
	I0312 00:20:57.442947 1188932 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18358-982285/.minikube/files/etc/ssl/certs/9876862.pem --> /usr/share/ca-certificates/9876862.pem (1708 bytes)
	I0312 00:20:57.472613 1188932 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0312 00:20:57.494421 1188932 ssh_runner.go:195] Run: openssl version
	I0312 00:20:57.503121 1188932 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9876862.pem && ln -fs /usr/share/ca-certificates/9876862.pem /etc/ssl/certs/9876862.pem"
	I0312 00:20:57.513927 1188932 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9876862.pem
	I0312 00:20:57.518112 1188932 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 11 23:40 /usr/share/ca-certificates/9876862.pem
	I0312 00:20:57.518204 1188932 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9876862.pem
	I0312 00:20:57.525289 1188932 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9876862.pem /etc/ssl/certs/3ec20f2e.0"
	I0312 00:20:57.535124 1188932 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0312 00:20:57.545222 1188932 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0312 00:20:57.549439 1188932 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 11 23:34 /usr/share/ca-certificates/minikubeCA.pem
	I0312 00:20:57.549510 1188932 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0312 00:20:57.556906 1188932 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0312 00:20:57.566292 1188932 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/987686.pem && ln -fs /usr/share/ca-certificates/987686.pem /etc/ssl/certs/987686.pem"
	I0312 00:20:57.576440 1188932 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/987686.pem
	I0312 00:20:57.580172 1188932 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 11 23:40 /usr/share/ca-certificates/987686.pem
	I0312 00:20:57.580268 1188932 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/987686.pem
	I0312 00:20:57.587370 1188932 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/987686.pem /etc/ssl/certs/51391683.0"
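Each CA above lands twice: the PEM itself under /usr/share/ca-certificates and a hash-named symlink in /etc/ssl/certs, because OpenSSL resolves issuers by subject-hash filenames of the form <hash>.0. The hash names seen in these steps (b5213941.0 for minikubeCA, and so on) come straight from the x509 -hash calls the log shows:
	# prints the subject hash that names the /etc/ssl/certs symlink
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # -> b5213941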
	I0312 00:20:57.596971 1188932 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0312 00:20:57.600754 1188932 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0312 00:20:57.607803 1188932 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0312 00:20:57.614831 1188932 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0312 00:20:57.621850 1188932 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0312 00:20:57.629155 1188932 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0312 00:20:57.636263 1188932 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
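Each -checkend 86400 above exits non-zero if the certificate expires within the next 24 hours, which is what would push minikube down its cert-regeneration path; all six checks passed here. The same check in isolation:
	sudo openssl x509 -noout -checkend 86400 \
	  -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
	  && echo "valid for >24h" || echo "expiring soon"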
	I0312 00:20:57.643278 1188932 kubeadm.go:391] StartCluster: {Name:no-preload-820117 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-820117 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0312 00:20:57.643425 1188932 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0312 00:20:57.643511 1188932 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0312 00:20:57.686858 1188932 cri.go:89] found id: "84b4bce6633a2747538b2b0094a5cdbd7dbdcc49272243631213feef4909f91d"
	I0312 00:20:57.686882 1188932 cri.go:89] found id: "9d99f44eeab838fb07f421ae4950e661e5f56229d69d698d8f9d2033522367ec"
	I0312 00:20:57.686887 1188932 cri.go:89] found id: "dcde5d00729bb4753633d3244c211c58c0f9f4746661a0859e35488e957c7617"
	I0312 00:20:57.686899 1188932 cri.go:89] found id: "98ff369cade299c3b79c03bdcf3fa19fb46b634e98833fe1a4dacb2e30baa2c8"
	I0312 00:20:57.686903 1188932 cri.go:89] found id: "c8529a2810c2bb6eb70f0f41e1933487322a4e0712a39e177213f3244173c3df"
	I0312 00:20:57.686907 1188932 cri.go:89] found id: "e37d0739e1cf5a034f8279d0f1c5a380bdf13f8df12805914af49b8d09457cc3"
	I0312 00:20:57.686910 1188932 cri.go:89] found id: "4e2109d4bafae2dd679c7ef05f522399401b1a72fb57a580332be0348557938e"
	I0312 00:20:57.686913 1188932 cri.go:89] found id: "1433ae512c5cae729926e9a205666b562fb23efcf8bdc4b06b97fb200275abad"
	I0312 00:20:57.686916 1188932 cri.go:89] found id: ""
	I0312 00:20:57.686969 1188932 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0312 00:20:57.699859 1188932 cri.go:116] JSON = null
	W0312 00:20:57.699907 1188932 kubeadm.go:398] unpause failed: list paused: list returned 0 containers, but ps returned 8
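The warning above is benign: minikube first lists CRI containers via crictl (the 8 IDs printed just before), then asks runc which of them are paused; runc returning null JSON means nothing is paused, so there is nothing to unpause. The two views it compared, runnable by hand:
	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	sudo runc --root /run/containerd/runc/k8s.io list -f json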
	I0312 00:20:57.699969 1188932 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0312 00:20:57.709442 1188932 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0312 00:20:57.709463 1188932 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0312 00:20:57.709471 1188932 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0312 00:20:57.709532 1188932 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0312 00:20:57.721537 1188932 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0312 00:20:57.722163 1188932 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-820117" does not appear in /home/jenkins/minikube-integration/18358-982285/kubeconfig
	I0312 00:20:57.722438 1188932 kubeconfig.go:62] /home/jenkins/minikube-integration/18358-982285/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-820117" cluster setting kubeconfig missing "no-preload-820117" context setting]
	I0312 00:20:57.722929 1188932 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18358-982285/kubeconfig: {Name:mk502765d2bd81c45b0b0cd22382df706d40c442 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0312 00:20:57.724384 1188932 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0312 00:20:57.735340 1188932 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.85.2
	I0312 00:20:57.735374 1188932 kubeadm.go:591] duration metric: took 25.897381ms to restartPrimaryControlPlane
	I0312 00:20:57.735384 1188932 kubeadm.go:393] duration metric: took 92.124741ms to StartCluster
	I0312 00:20:57.735401 1188932 settings.go:142] acquiring lock: {Name:mk66549f73c966ba6f23af9cfb4fef2b1aaf9da2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0312 00:20:57.735463 1188932 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18358-982285/kubeconfig
	I0312 00:20:57.736483 1188932 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18358-982285/kubeconfig: {Name:mk502765d2bd81c45b0b0cd22382df706d40c442 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0312 00:20:57.736708 1188932 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0312 00:20:57.740656 1188932 out.go:177] * Verifying Kubernetes components...
	I0312 00:20:57.737002 1188932 config.go:182] Loaded profile config "no-preload-820117": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.0-rc.2
	I0312 00:20:57.737013 1188932 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0312 00:20:57.740702 1188932 addons.go:69] Setting storage-provisioner=true in profile "no-preload-820117"
	I0312 00:20:57.740719 1188932 addons.go:69] Setting dashboard=true in profile "no-preload-820117"
	I0312 00:20:57.740742 1188932 addons.go:234] Setting addon storage-provisioner=true in "no-preload-820117"
	W0312 00:20:57.740750 1188932 addons.go:243] addon storage-provisioner should already be in state true
	I0312 00:20:57.740756 1188932 addons.go:234] Setting addon dashboard=true in "no-preload-820117"
	W0312 00:20:57.740764 1188932 addons.go:243] addon dashboard should already be in state true
	I0312 00:20:57.740781 1188932 host.go:66] Checking if "no-preload-820117" exists ...
	I0312 00:20:57.740791 1188932 host.go:66] Checking if "no-preload-820117" exists ...
	I0312 00:20:57.741249 1188932 cli_runner.go:164] Run: docker container inspect no-preload-820117 --format={{.State.Status}}
	I0312 00:20:57.741254 1188932 addons.go:69] Setting default-storageclass=true in profile "no-preload-820117"
	I0312 00:20:57.741275 1188932 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-820117"
	I0312 00:20:57.741480 1188932 cli_runner.go:164] Run: docker container inspect no-preload-820117 --format={{.State.Status}}
	I0312 00:20:57.741868 1188932 addons.go:69] Setting metrics-server=true in profile "no-preload-820117"
	I0312 00:20:57.744299 1188932 addons.go:234] Setting addon metrics-server=true in "no-preload-820117"
	W0312 00:20:57.744329 1188932 addons.go:243] addon metrics-server should already be in state true
	I0312 00:20:57.741249 1188932 cli_runner.go:164] Run: docker container inspect no-preload-820117 --format={{.State.Status}}
	I0312 00:20:57.744211 1188932 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0312 00:20:57.744412 1188932 host.go:66] Checking if "no-preload-820117" exists ...
	I0312 00:20:57.745126 1188932 cli_runner.go:164] Run: docker container inspect no-preload-820117 --format={{.State.Status}}
	I0312 00:20:57.800871 1188932 addons.go:234] Setting addon default-storageclass=true in "no-preload-820117"
	W0312 00:20:57.800894 1188932 addons.go:243] addon default-storageclass should already be in state true
	I0312 00:20:57.800920 1188932 host.go:66] Checking if "no-preload-820117" exists ...
	I0312 00:20:57.801368 1188932 cli_runner.go:164] Run: docker container inspect no-preload-820117 --format={{.State.Status}}
	I0312 00:20:57.806980 1188932 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0312 00:20:57.811268 1188932 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0312 00:20:57.811296 1188932 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0312 00:20:57.811449 1188932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-820117
	I0312 00:20:57.827361 1188932 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0312 00:20:57.833312 1188932 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0312 00:20:57.837008 1188932 addons.go:426] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0312 00:20:57.837085 1188932 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0312 00:20:57.837186 1188932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-820117
	I0312 00:20:57.840420 1188932 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0312 00:20:54.718119 1183642 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-571339" in "kube-system" namespace has status "Ready":"False"
	I0312 00:20:56.719178 1183642 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-571339" in "kube-system" namespace has status "Ready":"False"
	I0312 00:20:57.843116 1188932 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0312 00:20:57.843138 1188932 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0312 00:20:57.843213 1188932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-820117
	I0312 00:20:57.865763 1188932 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0312 00:20:57.865787 1188932 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0312 00:20:57.865879 1188932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-820117
	I0312 00:20:57.887088 1188932 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34202 SSHKeyPath:/home/jenkins/minikube-integration/18358-982285/.minikube/machines/no-preload-820117/id_rsa Username:docker}
	I0312 00:20:57.889268 1188932 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34202 SSHKeyPath:/home/jenkins/minikube-integration/18358-982285/.minikube/machines/no-preload-820117/id_rsa Username:docker}
	I0312 00:20:57.925474 1188932 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34202 SSHKeyPath:/home/jenkins/minikube-integration/18358-982285/.minikube/machines/no-preload-820117/id_rsa Username:docker}
	I0312 00:20:57.933711 1188932 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34202 SSHKeyPath:/home/jenkins/minikube-integration/18358-982285/.minikube/machines/no-preload-820117/id_rsa Username:docker}
	I0312 00:20:57.964991 1188932 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0312 00:20:58.087728 1188932 node_ready.go:35] waiting up to 6m0s for node "no-preload-820117" to be "Ready" ...
	I0312 00:20:58.156702 1188932 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0312 00:20:58.156779 1188932 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0312 00:20:58.208389 1188932 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0312 00:20:58.208429 1188932 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0312 00:20:58.230802 1188932 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0312 00:20:58.230826 1188932 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0312 00:20:58.272847 1188932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0312 00:20:58.281485 1188932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0312 00:20:58.336996 1188932 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0312 00:20:58.337070 1188932 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0312 00:20:58.338654 1188932 addons.go:426] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0312 00:20:58.338709 1188932 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0312 00:20:58.373149 1188932 addons.go:426] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0312 00:20:58.373177 1188932 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0312 00:20:58.505474 1188932 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0312 00:20:58.505503 1188932 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0312 00:20:58.694506 1188932 addons.go:426] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0312 00:20:58.694534 1188932 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0312 00:20:58.773111 1188932 addons.go:426] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0312 00:20:58.773142 1188932 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0312 00:20:58.864635 1188932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0312 00:20:58.964010 1188932 addons.go:426] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0312 00:20:58.964043 1188932 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0312 00:20:59.059937 1188932 addons.go:426] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0312 00:20:59.059971 1188932 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0312 00:20:59.193041 1188932 addons.go:426] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0312 00:20:59.193067 1188932 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0312 00:20:59.251256 1188932 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0312 00:20:59.220911 1183642 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-571339" in "kube-system" namespace has status "Ready":"False"
	I0312 00:21:01.719534 1183642 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-571339" in "kube-system" namespace has status "Ready":"False"
	I0312 00:21:03.113667 1188932 node_ready.go:49] node "no-preload-820117" has status "Ready":"True"
	I0312 00:21:03.113691 1188932 node_ready.go:38] duration metric: took 5.025918521s for node "no-preload-820117" to be "Ready" ...
	I0312 00:21:03.113708 1188932 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0312 00:21:03.248632 1188932 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-bd88h" in "kube-system" namespace to be "Ready" ...
	I0312 00:21:03.263866 1188932 pod_ready.go:92] pod "coredns-76f75df574-bd88h" in "kube-system" namespace has status "Ready":"True"
	I0312 00:21:03.263937 1188932 pod_ready.go:81] duration metric: took 15.217798ms for pod "coredns-76f75df574-bd88h" in "kube-system" namespace to be "Ready" ...
	I0312 00:21:03.263963 1188932 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-820117" in "kube-system" namespace to be "Ready" ...
	I0312 00:21:03.295559 1188932 pod_ready.go:92] pod "etcd-no-preload-820117" in "kube-system" namespace has status "Ready":"True"
	I0312 00:21:03.295580 1188932 pod_ready.go:81] duration metric: took 31.596646ms for pod "etcd-no-preload-820117" in "kube-system" namespace to be "Ready" ...
	I0312 00:21:03.295599 1188932 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-820117" in "kube-system" namespace to be "Ready" ...
	I0312 00:21:03.333460 1188932 pod_ready.go:92] pod "kube-apiserver-no-preload-820117" in "kube-system" namespace has status "Ready":"True"
	I0312 00:21:03.333494 1188932 pod_ready.go:81] duration metric: took 37.886377ms for pod "kube-apiserver-no-preload-820117" in "kube-system" namespace to be "Ready" ...
	I0312 00:21:03.333506 1188932 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-820117" in "kube-system" namespace to be "Ready" ...
	I0312 00:21:03.341468 1188932 pod_ready.go:92] pod "kube-controller-manager-no-preload-820117" in "kube-system" namespace has status "Ready":"True"
	I0312 00:21:03.341501 1188932 pod_ready.go:81] duration metric: took 7.987145ms for pod "kube-controller-manager-no-preload-820117" in "kube-system" namespace to be "Ready" ...
	I0312 00:21:03.341529 1188932 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-p9vcv" in "kube-system" namespace to be "Ready" ...
	I0312 00:21:03.372669 1188932 pod_ready.go:92] pod "kube-proxy-p9vcv" in "kube-system" namespace has status "Ready":"True"
	I0312 00:21:03.372703 1188932 pod_ready.go:81] duration metric: took 31.159957ms for pod "kube-proxy-p9vcv" in "kube-system" namespace to be "Ready" ...
	I0312 00:21:03.372731 1188932 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-820117" in "kube-system" namespace to be "Ready" ...
	I0312 00:21:03.806738 1188932 pod_ready.go:92] pod "kube-scheduler-no-preload-820117" in "kube-system" namespace has status "Ready":"True"
	I0312 00:21:03.806775 1188932 pod_ready.go:81] duration metric: took 434.030637ms for pod "kube-scheduler-no-preload-820117" in "kube-system" namespace to be "Ready" ...
	I0312 00:21:03.806787 1188932 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-h8x5t" in "kube-system" namespace to be "Ready" ...
	I0312 00:21:03.844472 1188932 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.571568585s)
	I0312 00:21:05.814150 1188932 pod_ready.go:102] pod "metrics-server-57f55c9bc5-h8x5t" in "kube-system" namespace has status "Ready":"False"
	I0312 00:21:06.167343 1188932 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.885820804s)
	I0312 00:21:06.257173 1188932 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.392495941s)
	I0312 00:21:06.257377 1188932 addons.go:470] Verifying addon metrics-server=true in "no-preload-820117"
	I0312 00:21:06.257352 1188932 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (7.006055397s)
	I0312 00:21:06.259702 1188932 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-820117 addons enable metrics-server
	
	I0312 00:21:06.261821 1188932 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0312 00:21:03.227170 1183642 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-571339" in "kube-system" namespace has status "Ready":"True"
	I0312 00:21:03.227190 1183642 pod_ready.go:81] duration metric: took 57.016156506s for pod "kube-controller-manager-old-k8s-version-571339" in "kube-system" namespace to be "Ready" ...
	I0312 00:21:03.227202 1183642 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tvrz6" in "kube-system" namespace to be "Ready" ...
	I0312 00:21:03.233214 1183642 pod_ready.go:92] pod "kube-proxy-tvrz6" in "kube-system" namespace has status "Ready":"True"
	I0312 00:21:03.233236 1183642 pod_ready.go:81] duration metric: took 6.026509ms for pod "kube-proxy-tvrz6" in "kube-system" namespace to be "Ready" ...
	I0312 00:21:03.233252 1183642 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-571339" in "kube-system" namespace to be "Ready" ...
	I0312 00:21:05.239789 1183642 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-571339" in "kube-system" namespace has status "Ready":"False"
	I0312 00:21:07.740120 1183642 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-571339" in "kube-system" namespace has status "Ready":"False"
	I0312 00:21:06.263876 1188932 addons.go:505] duration metric: took 8.526856984s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I0312 00:21:08.313184 1188932 pod_ready.go:102] pod "metrics-server-57f55c9bc5-h8x5t" in "kube-system" namespace has status "Ready":"False"
	I0312 00:21:10.316553 1188932 pod_ready.go:102] pod "metrics-server-57f55c9bc5-h8x5t" in "kube-system" namespace has status "Ready":"False"
	I0312 00:21:10.240027 1183642 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-571339" in "kube-system" namespace has status "Ready":"False"
	I0312 00:21:12.739407 1183642 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-571339" in "kube-system" namespace has status "Ready":"False"
	I0312 00:21:12.814806 1188932 pod_ready.go:102] pod "metrics-server-57f55c9bc5-h8x5t" in "kube-system" namespace has status "Ready":"False"
	I0312 00:21:15.313639 1188932 pod_ready.go:102] pod "metrics-server-57f55c9bc5-h8x5t" in "kube-system" namespace has status "Ready":"False"
	I0312 00:21:14.739905 1183642 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-571339" in "kube-system" namespace has status "Ready":"False"
	I0312 00:21:17.239412 1183642 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-571339" in "kube-system" namespace has status "Ready":"False"
	I0312 00:21:17.814088 1188932 pod_ready.go:102] pod "metrics-server-57f55c9bc5-h8x5t" in "kube-system" namespace has status "Ready":"False"
	I0312 00:21:19.814247 1188932 pod_ready.go:102] pod "metrics-server-57f55c9bc5-h8x5t" in "kube-system" namespace has status "Ready":"False"
	I0312 00:21:19.240306 1183642 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-571339" in "kube-system" namespace has status "Ready":"False"
	I0312 00:21:21.741209 1183642 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-571339" in "kube-system" namespace has status "Ready":"False"
	I0312 00:21:21.832031 1188932 pod_ready.go:102] pod "metrics-server-57f55c9bc5-h8x5t" in "kube-system" namespace has status "Ready":"False"
	I0312 00:21:24.312961 1188932 pod_ready.go:102] pod "metrics-server-57f55c9bc5-h8x5t" in "kube-system" namespace has status "Ready":"False"
	I0312 00:21:23.741756 1183642 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-571339" in "kube-system" namespace has status "Ready":"False"
	I0312 00:21:25.243129 1183642 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-571339" in "kube-system" namespace has status "Ready":"True"
	I0312 00:21:25.243227 1183642 pod_ready.go:81] duration metric: took 22.009965136s for pod "kube-scheduler-old-k8s-version-571339" in "kube-system" namespace to be "Ready" ...
	I0312 00:21:25.243292 1183642 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace to be "Ready" ...
	I0312 00:21:27.249922 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:21:26.313558 1188932 pod_ready.go:102] pod "metrics-server-57f55c9bc5-h8x5t" in "kube-system" namespace has status "Ready":"False"
	I0312 00:21:28.313787 1188932 pod_ready.go:102] pod "metrics-server-57f55c9bc5-h8x5t" in "kube-system" namespace has status "Ready":"False"
	I0312 00:21:29.253629 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:21:31.750359 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:21:30.813610 1188932 pod_ready.go:102] pod "metrics-server-57f55c9bc5-h8x5t" in "kube-system" namespace has status "Ready":"False"
	I0312 00:21:32.813843 1188932 pod_ready.go:102] pod "metrics-server-57f55c9bc5-h8x5t" in "kube-system" namespace has status "Ready":"False"
	I0312 00:21:35.313045 1188932 pod_ready.go:102] pod "metrics-server-57f55c9bc5-h8x5t" in "kube-system" namespace has status "Ready":"False"
	I0312 00:21:34.249496 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:21:36.250802 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:21:37.313937 1188932 pod_ready.go:102] pod "metrics-server-57f55c9bc5-h8x5t" in "kube-system" namespace has status "Ready":"False"
	I0312 00:21:39.813372 1188932 pod_ready.go:102] pod "metrics-server-57f55c9bc5-h8x5t" in "kube-system" namespace has status "Ready":"False"
	I0312 00:21:38.250948 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:21:40.754158 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:21:41.813995 1188932 pod_ready.go:102] pod "metrics-server-57f55c9bc5-h8x5t" in "kube-system" namespace has status "Ready":"False"
	I0312 00:21:44.313530 1188932 pod_ready.go:102] pod "metrics-server-57f55c9bc5-h8x5t" in "kube-system" namespace has status "Ready":"False"
	I0312 00:21:43.249095 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:21:45.250392 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:21:47.749722 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:21:46.313653 1188932 pod_ready.go:102] pod "metrics-server-57f55c9bc5-h8x5t" in "kube-system" namespace has status "Ready":"False"
	I0312 00:21:48.813597 1188932 pod_ready.go:102] pod "metrics-server-57f55c9bc5-h8x5t" in "kube-system" namespace has status "Ready":"False"
	I0312 00:21:49.749754 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:21:51.750403 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:21:50.814351 1188932 pod_ready.go:102] pod "metrics-server-57f55c9bc5-h8x5t" in "kube-system" namespace has status "Ready":"False"
	I0312 00:21:53.313263 1188932 pod_ready.go:102] pod "metrics-server-57f55c9bc5-h8x5t" in "kube-system" namespace has status "Ready":"False"
	I0312 00:21:55.313689 1188932 pod_ready.go:102] pod "metrics-server-57f55c9bc5-h8x5t" in "kube-system" namespace has status "Ready":"False"
	I0312 00:21:54.250385 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:21:56.770534 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:21:57.316917 1188932 pod_ready.go:102] pod "metrics-server-57f55c9bc5-h8x5t" in "kube-system" namespace has status "Ready":"False"
	I0312 00:21:59.813635 1188932 pod_ready.go:102] pod "metrics-server-57f55c9bc5-h8x5t" in "kube-system" namespace has status "Ready":"False"
	I0312 00:21:59.249206 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:22:01.258864 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:22:01.815257 1188932 pod_ready.go:102] pod "metrics-server-57f55c9bc5-h8x5t" in "kube-system" namespace has status "Ready":"False"
	I0312 00:22:04.316221 1188932 pod_ready.go:102] pod "metrics-server-57f55c9bc5-h8x5t" in "kube-system" namespace has status "Ready":"False"
	I0312 00:22:03.749523 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:22:05.750110 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:22:06.814865 1188932 pod_ready.go:102] pod "metrics-server-57f55c9bc5-h8x5t" in "kube-system" namespace has status "Ready":"False"
	I0312 00:22:09.313710 1188932 pod_ready.go:102] pod "metrics-server-57f55c9bc5-h8x5t" in "kube-system" namespace has status "Ready":"False"
	I0312 00:22:08.250519 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:22:10.750288 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:22:11.813589 1188932 pod_ready.go:102] pod "metrics-server-57f55c9bc5-h8x5t" in "kube-system" namespace has status "Ready":"False"
	I0312 00:22:14.312931 1188932 pod_ready.go:102] pod "metrics-server-57f55c9bc5-h8x5t" in "kube-system" namespace has status "Ready":"False"
	I0312 00:22:13.250136 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:22:15.256797 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:22:17.749938 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:22:16.315604 1188932 pod_ready.go:102] pod "metrics-server-57f55c9bc5-h8x5t" in "kube-system" namespace has status "Ready":"False"
	I0312 00:22:18.813703 1188932 pod_ready.go:102] pod "metrics-server-57f55c9bc5-h8x5t" in "kube-system" namespace has status "Ready":"False"
	I0312 00:22:20.249074 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:22:22.249939 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:22:21.313512 1188932 pod_ready.go:102] pod "metrics-server-57f55c9bc5-h8x5t" in "kube-system" namespace has status "Ready":"False"
	I0312 00:22:23.813611 1188932 pod_ready.go:102] pod "metrics-server-57f55c9bc5-h8x5t" in "kube-system" namespace has status "Ready":"False"
	I0312 00:22:24.748800 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:22:26.749624 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:22:26.314585 1188932 pod_ready.go:102] pod "metrics-server-57f55c9bc5-h8x5t" in "kube-system" namespace has status "Ready":"False"
	I0312 00:22:28.814287 1188932 pod_ready.go:102] pod "metrics-server-57f55c9bc5-h8x5t" in "kube-system" namespace has status "Ready":"False"
	I0312 00:22:29.253144 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:22:31.749968 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:22:31.313261 1188932 pod_ready.go:102] pod "metrics-server-57f55c9bc5-h8x5t" in "kube-system" namespace has status "Ready":"False"
	I0312 00:22:33.812805 1188932 pod_ready.go:102] pod "metrics-server-57f55c9bc5-h8x5t" in "kube-system" namespace has status "Ready":"False"
	I0312 00:22:34.250397 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:22:36.749482 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:22:35.813112 1188932 pod_ready.go:102] pod "metrics-server-57f55c9bc5-h8x5t" in "kube-system" namespace has status "Ready":"False"
	I0312 00:22:37.813503 1188932 pod_ready.go:102] pod "metrics-server-57f55c9bc5-h8x5t" in "kube-system" namespace has status "Ready":"False"
	I0312 00:22:40.313595 1188932 pod_ready.go:102] pod "metrics-server-57f55c9bc5-h8x5t" in "kube-system" namespace has status "Ready":"False"
	I0312 00:22:39.249893 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:22:41.749896 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:22:42.314090 1188932 pod_ready.go:102] pod "metrics-server-57f55c9bc5-h8x5t" in "kube-system" namespace has status "Ready":"False"
	I0312 00:22:44.813268 1188932 pod_ready.go:102] pod "metrics-server-57f55c9bc5-h8x5t" in "kube-system" namespace has status "Ready":"False"
	I0312 00:22:44.249675 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:22:46.749855 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:22:46.814183 1188932 pod_ready.go:102] pod "metrics-server-57f55c9bc5-h8x5t" in "kube-system" namespace has status "Ready":"False"
	I0312 00:22:49.312904 1188932 pod_ready.go:102] pod "metrics-server-57f55c9bc5-h8x5t" in "kube-system" namespace has status "Ready":"False"
	I0312 00:22:49.249664 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:22:51.249976 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:22:51.812954 1188932 pod_ready.go:102] pod "metrics-server-57f55c9bc5-h8x5t" in "kube-system" namespace has status "Ready":"False"
	I0312 00:22:53.813494 1188932 pod_ready.go:102] pod "metrics-server-57f55c9bc5-h8x5t" in "kube-system" namespace has status "Ready":"False"
	I0312 00:22:53.250606 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:22:55.750464 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:22:55.813824 1188932 pod_ready.go:102] pod "metrics-server-57f55c9bc5-h8x5t" in "kube-system" namespace has status "Ready":"False"
	I0312 00:22:57.815080 1188932 pod_ready.go:102] pod "metrics-server-57f55c9bc5-h8x5t" in "kube-system" namespace has status "Ready":"False"
	I0312 00:23:00.314547 1188932 pod_ready.go:102] pod "metrics-server-57f55c9bc5-h8x5t" in "kube-system" namespace has status "Ready":"False"
	I0312 00:22:58.249257 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:23:00.267356 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:23:02.750532 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:23:02.813543 1188932 pod_ready.go:102] pod "metrics-server-57f55c9bc5-h8x5t" in "kube-system" namespace has status "Ready":"False"
	I0312 00:23:05.313532 1188932 pod_ready.go:102] pod "metrics-server-57f55c9bc5-h8x5t" in "kube-system" namespace has status "Ready":"False"
	I0312 00:23:05.250606 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:23:07.750716 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:23:07.314229 1188932 pod_ready.go:102] pod "metrics-server-57f55c9bc5-h8x5t" in "kube-system" namespace has status "Ready":"False"
	I0312 00:23:09.813749 1188932 pod_ready.go:102] pod "metrics-server-57f55c9bc5-h8x5t" in "kube-system" namespace has status "Ready":"False"
	I0312 00:23:10.251242 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:23:12.749463 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:23:11.813841 1188932 pod_ready.go:102] pod "metrics-server-57f55c9bc5-h8x5t" in "kube-system" namespace has status "Ready":"False"
	I0312 00:23:13.820653 1188932 pod_ready.go:102] pod "metrics-server-57f55c9bc5-h8x5t" in "kube-system" namespace has status "Ready":"False"
	I0312 00:23:15.252504 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:23:17.749439 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:23:16.313277 1188932 pod_ready.go:102] pod "metrics-server-57f55c9bc5-h8x5t" in "kube-system" namespace has status "Ready":"False"
	I0312 00:23:18.313547 1188932 pod_ready.go:102] pod "metrics-server-57f55c9bc5-h8x5t" in "kube-system" namespace has status "Ready":"False"
	I0312 00:23:20.313817 1188932 pod_ready.go:102] pod "metrics-server-57f55c9bc5-h8x5t" in "kube-system" namespace has status "Ready":"False"
	I0312 00:23:19.750446 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:23:22.249233 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:23:22.813713 1188932 pod_ready.go:102] pod "metrics-server-57f55c9bc5-h8x5t" in "kube-system" namespace has status "Ready":"False"
	I0312 00:23:25.313209 1188932 pod_ready.go:102] pod "metrics-server-57f55c9bc5-h8x5t" in "kube-system" namespace has status "Ready":"False"
	I0312 00:23:24.750135 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:23:27.249368 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:23:27.816745 1188932 pod_ready.go:102] pod "metrics-server-57f55c9bc5-h8x5t" in "kube-system" namespace has status "Ready":"False"
	I0312 00:23:30.314045 1188932 pod_ready.go:102] pod "metrics-server-57f55c9bc5-h8x5t" in "kube-system" namespace has status "Ready":"False"
	I0312 00:23:29.751700 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:23:32.252338 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:23:32.812864 1188932 pod_ready.go:102] pod "metrics-server-57f55c9bc5-h8x5t" in "kube-system" namespace has status "Ready":"False"
	I0312 00:23:34.813470 1188932 pod_ready.go:102] pod "metrics-server-57f55c9bc5-h8x5t" in "kube-system" namespace has status "Ready":"False"
	I0312 00:23:34.749267 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:23:36.750479 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:23:36.813525 1188932 pod_ready.go:102] pod "metrics-server-57f55c9bc5-h8x5t" in "kube-system" namespace has status "Ready":"False"
	I0312 00:23:38.813621 1188932 pod_ready.go:102] pod "metrics-server-57f55c9bc5-h8x5t" in "kube-system" namespace has status "Ready":"False"
	I0312 00:23:39.250553 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:23:41.750376 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:23:41.313580 1188932 pod_ready.go:102] pod "metrics-server-57f55c9bc5-h8x5t" in "kube-system" namespace has status "Ready":"False"
	I0312 00:23:43.314016 1188932 pod_ready.go:102] pod "metrics-server-57f55c9bc5-h8x5t" in "kube-system" namespace has status "Ready":"False"
	I0312 00:23:43.750574 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:23:46.249404 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:23:45.813789 1188932 pod_ready.go:102] pod "metrics-server-57f55c9bc5-h8x5t" in "kube-system" namespace has status "Ready":"False"
	I0312 00:23:47.813925 1188932 pod_ready.go:102] pod "metrics-server-57f55c9bc5-h8x5t" in "kube-system" namespace has status "Ready":"False"
	I0312 00:23:49.814192 1188932 pod_ready.go:102] pod "metrics-server-57f55c9bc5-h8x5t" in "kube-system" namespace has status "Ready":"False"
	I0312 00:23:48.250459 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:23:50.259454 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:23:52.750308 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:23:52.313527 1188932 pod_ready.go:102] pod "metrics-server-57f55c9bc5-h8x5t" in "kube-system" namespace has status "Ready":"False"
	I0312 00:23:54.314535 1188932 pod_ready.go:102] pod "metrics-server-57f55c9bc5-h8x5t" in "kube-system" namespace has status "Ready":"False"
	I0312 00:23:55.250042 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:23:57.750235 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:23:56.813942 1188932 pod_ready.go:102] pod "metrics-server-57f55c9bc5-h8x5t" in "kube-system" namespace has status "Ready":"False"
	I0312 00:23:59.313409 1188932 pod_ready.go:102] pod "metrics-server-57f55c9bc5-h8x5t" in "kube-system" namespace has status "Ready":"False"
	I0312 00:24:00.267118 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:24:02.748886 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:24:01.313867 1188932 pod_ready.go:102] pod "metrics-server-57f55c9bc5-h8x5t" in "kube-system" namespace has status "Ready":"False"
	I0312 00:24:03.813736 1188932 pod_ready.go:102] pod "metrics-server-57f55c9bc5-h8x5t" in "kube-system" namespace has status "Ready":"False"
	I0312 00:24:04.749475 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:24:07.250678 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:24:05.814473 1188932 pod_ready.go:102] pod "metrics-server-57f55c9bc5-h8x5t" in "kube-system" namespace has status "Ready":"False"
	I0312 00:24:08.313478 1188932 pod_ready.go:102] pod "metrics-server-57f55c9bc5-h8x5t" in "kube-system" namespace has status "Ready":"False"
	I0312 00:24:10.313750 1188932 pod_ready.go:102] pod "metrics-server-57f55c9bc5-h8x5t" in "kube-system" namespace has status "Ready":"False"
	I0312 00:24:09.749225 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:24:11.750512 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:24:12.313795 1188932 pod_ready.go:102] pod "metrics-server-57f55c9bc5-h8x5t" in "kube-system" namespace has status "Ready":"False"
	I0312 00:24:14.813978 1188932 pod_ready.go:102] pod "metrics-server-57f55c9bc5-h8x5t" in "kube-system" namespace has status "Ready":"False"
	I0312 00:24:14.250329 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:24:16.749575 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:24:16.814143 1188932 pod_ready.go:102] pod "metrics-server-57f55c9bc5-h8x5t" in "kube-system" namespace has status "Ready":"False"
	I0312 00:24:19.318164 1188932 pod_ready.go:102] pod "metrics-server-57f55c9bc5-h8x5t" in "kube-system" namespace has status "Ready":"False"
	I0312 00:24:19.250281 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:24:21.749352 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:24:21.813539 1188932 pod_ready.go:102] pod "metrics-server-57f55c9bc5-h8x5t" in "kube-system" namespace has status "Ready":"False"
	I0312 00:24:23.813877 1188932 pod_ready.go:102] pod "metrics-server-57f55c9bc5-h8x5t" in "kube-system" namespace has status "Ready":"False"
	I0312 00:24:23.749394 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:24:25.749579 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:24:27.750348 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:24:26.313792 1188932 pod_ready.go:102] pod "metrics-server-57f55c9bc5-h8x5t" in "kube-system" namespace has status "Ready":"False"
	I0312 00:24:28.813350 1188932 pod_ready.go:102] pod "metrics-server-57f55c9bc5-h8x5t" in "kube-system" namespace has status "Ready":"False"
	I0312 00:24:30.249873 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:24:32.249981 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:24:30.813918 1188932 pod_ready.go:102] pod "metrics-server-57f55c9bc5-h8x5t" in "kube-system" namespace has status "Ready":"False"
	I0312 00:24:33.314028 1188932 pod_ready.go:102] pod "metrics-server-57f55c9bc5-h8x5t" in "kube-system" namespace has status "Ready":"False"
	I0312 00:24:35.314496 1188932 pod_ready.go:102] pod "metrics-server-57f55c9bc5-h8x5t" in "kube-system" namespace has status "Ready":"False"
	I0312 00:24:34.749339 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:24:37.249723 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:24:37.817186 1188932 pod_ready.go:102] pod "metrics-server-57f55c9bc5-h8x5t" in "kube-system" namespace has status "Ready":"False"
	I0312 00:24:40.313608 1188932 pod_ready.go:102] pod "metrics-server-57f55c9bc5-h8x5t" in "kube-system" namespace has status "Ready":"False"
	I0312 00:24:39.249953 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:24:41.749722 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:24:42.314608 1188932 pod_ready.go:102] pod "metrics-server-57f55c9bc5-h8x5t" in "kube-system" namespace has status "Ready":"False"
	I0312 00:24:44.314832 1188932 pod_ready.go:102] pod "metrics-server-57f55c9bc5-h8x5t" in "kube-system" namespace has status "Ready":"False"
	I0312 00:24:43.750163 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:24:46.249711 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:24:46.813345 1188932 pod_ready.go:102] pod "metrics-server-57f55c9bc5-h8x5t" in "kube-system" namespace has status "Ready":"False"
	I0312 00:24:48.814051 1188932 pod_ready.go:102] pod "metrics-server-57f55c9bc5-h8x5t" in "kube-system" namespace has status "Ready":"False"
	I0312 00:24:48.249848 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:24:50.250297 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:24:52.753532 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:24:51.313790 1188932 pod_ready.go:102] pod "metrics-server-57f55c9bc5-h8x5t" in "kube-system" namespace has status "Ready":"False"
	I0312 00:24:53.814548 1188932 pod_ready.go:102] pod "metrics-server-57f55c9bc5-h8x5t" in "kube-system" namespace has status "Ready":"False"
	I0312 00:24:55.251518 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:24:57.749656 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:24:56.313176 1188932 pod_ready.go:102] pod "metrics-server-57f55c9bc5-h8x5t" in "kube-system" namespace has status "Ready":"False"
	I0312 00:24:58.813951 1188932 pod_ready.go:102] pod "metrics-server-57f55c9bc5-h8x5t" in "kube-system" namespace has status "Ready":"False"
	I0312 00:24:59.749784 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:25:01.750995 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:25:01.314048 1188932 pod_ready.go:102] pod "metrics-server-57f55c9bc5-h8x5t" in "kube-system" namespace has status "Ready":"False"
	I0312 00:25:03.813782 1188932 pod_ready.go:102] pod "metrics-server-57f55c9bc5-h8x5t" in "kube-system" namespace has status "Ready":"False"
	I0312 00:25:03.813813 1188932 pod_ready.go:81] duration metric: took 4m0.007017303s for pod "metrics-server-57f55c9bc5-h8x5t" in "kube-system" namespace to be "Ready" ...
	E0312 00:25:03.813824 1188932 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0312 00:25:03.813832 1188932 pod_ready.go:38] duration metric: took 4m0.700114288s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0312 00:25:03.813846 1188932 api_server.go:52] waiting for apiserver process to appear ...
	I0312 00:25:03.813878 1188932 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0312 00:25:03.813955 1188932 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0312 00:25:03.877358 1188932 cri.go:89] found id: "5bba79d28e804260c46d7e8af4c7322a6e22b8739a7b65e4f7be5d64306a2ec8"
	I0312 00:25:03.877377 1188932 cri.go:89] found id: "4e2109d4bafae2dd679c7ef05f522399401b1a72fb57a580332be0348557938e"
	I0312 00:25:03.877382 1188932 cri.go:89] found id: ""
	I0312 00:25:03.877389 1188932 logs.go:276] 2 containers: [5bba79d28e804260c46d7e8af4c7322a6e22b8739a7b65e4f7be5d64306a2ec8 4e2109d4bafae2dd679c7ef05f522399401b1a72fb57a580332be0348557938e]
	I0312 00:25:03.877445 1188932 ssh_runner.go:195] Run: which crictl
	I0312 00:25:03.881097 1188932 ssh_runner.go:195] Run: which crictl
	I0312 00:25:03.884555 1188932 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0312 00:25:03.884629 1188932 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0312 00:25:03.927396 1188932 cri.go:89] found id: "4c03779c654a410263bf652c7f2d430f984d91760e30f9510e4952a4de615984"
	I0312 00:25:03.927417 1188932 cri.go:89] found id: "1433ae512c5cae729926e9a205666b562fb23efcf8bdc4b06b97fb200275abad"
	I0312 00:25:03.927421 1188932 cri.go:89] found id: ""
	I0312 00:25:03.927429 1188932 logs.go:276] 2 containers: [4c03779c654a410263bf652c7f2d430f984d91760e30f9510e4952a4de615984 1433ae512c5cae729926e9a205666b562fb23efcf8bdc4b06b97fb200275abad]
	I0312 00:25:03.927515 1188932 ssh_runner.go:195] Run: which crictl
	I0312 00:25:03.931339 1188932 ssh_runner.go:195] Run: which crictl
	I0312 00:25:03.934844 1188932 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0312 00:25:03.934921 1188932 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0312 00:25:03.981316 1188932 cri.go:89] found id: "c2acdec1d054a8ac4872c3f65232c70b6c961ec0aba27f0912710c095efd38b6"
	I0312 00:25:03.981339 1188932 cri.go:89] found id: "84b4bce6633a2747538b2b0094a5cdbd7dbdcc49272243631213feef4909f91d"
	I0312 00:25:03.981348 1188932 cri.go:89] found id: ""
	I0312 00:25:03.981355 1188932 logs.go:276] 2 containers: [c2acdec1d054a8ac4872c3f65232c70b6c961ec0aba27f0912710c095efd38b6 84b4bce6633a2747538b2b0094a5cdbd7dbdcc49272243631213feef4909f91d]
	I0312 00:25:03.981435 1188932 ssh_runner.go:195] Run: which crictl
	I0312 00:25:03.984979 1188932 ssh_runner.go:195] Run: which crictl
	I0312 00:25:03.988598 1188932 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0312 00:25:03.988671 1188932 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0312 00:25:04.033496 1188932 cri.go:89] found id: "88c55a6c60bb439bcdf99b25641fcd058c1a45d0e83950575189d082ee25643b"
	I0312 00:25:04.033518 1188932 cri.go:89] found id: "c8529a2810c2bb6eb70f0f41e1933487322a4e0712a39e177213f3244173c3df"
	I0312 00:25:04.033523 1188932 cri.go:89] found id: ""
	I0312 00:25:04.033530 1188932 logs.go:276] 2 containers: [88c55a6c60bb439bcdf99b25641fcd058c1a45d0e83950575189d082ee25643b c8529a2810c2bb6eb70f0f41e1933487322a4e0712a39e177213f3244173c3df]
	I0312 00:25:04.033591 1188932 ssh_runner.go:195] Run: which crictl
	I0312 00:25:04.037404 1188932 ssh_runner.go:195] Run: which crictl
	I0312 00:25:04.040992 1188932 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0312 00:25:04.041082 1188932 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0312 00:25:04.081262 1188932 cri.go:89] found id: "696c39e44fc846bdc283e21d2e90ef3b55024f28122476a01c185722052ac947"
	I0312 00:25:04.081323 1188932 cri.go:89] found id: "98ff369cade299c3b79c03bdcf3fa19fb46b634e98833fe1a4dacb2e30baa2c8"
	I0312 00:25:04.081341 1188932 cri.go:89] found id: ""
	I0312 00:25:04.081362 1188932 logs.go:276] 2 containers: [696c39e44fc846bdc283e21d2e90ef3b55024f28122476a01c185722052ac947 98ff369cade299c3b79c03bdcf3fa19fb46b634e98833fe1a4dacb2e30baa2c8]
	I0312 00:25:04.081448 1188932 ssh_runner.go:195] Run: which crictl
	I0312 00:25:04.085159 1188932 ssh_runner.go:195] Run: which crictl
	I0312 00:25:04.089384 1188932 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0312 00:25:04.089505 1188932 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0312 00:25:04.132607 1188932 cri.go:89] found id: "819dcb8ce3bf495d395f8bfd175489a574b14f4d7ec3abfc8d1da331f9f75175"
	I0312 00:25:04.132628 1188932 cri.go:89] found id: "e37d0739e1cf5a034f8279d0f1c5a380bdf13f8df12805914af49b8d09457cc3"
	I0312 00:25:04.132633 1188932 cri.go:89] found id: ""
	I0312 00:25:04.132640 1188932 logs.go:276] 2 containers: [819dcb8ce3bf495d395f8bfd175489a574b14f4d7ec3abfc8d1da331f9f75175 e37d0739e1cf5a034f8279d0f1c5a380bdf13f8df12805914af49b8d09457cc3]
	I0312 00:25:04.132721 1188932 ssh_runner.go:195] Run: which crictl
	I0312 00:25:04.141275 1188932 ssh_runner.go:195] Run: which crictl
	I0312 00:25:04.144869 1188932 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0312 00:25:04.144998 1188932 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0312 00:25:04.184133 1188932 cri.go:89] found id: "323bfd261df2aa6bdcaadc95d9b56bc2270a431303c0f1b4ecf16c54f87d6c7b"
	I0312 00:25:04.184159 1188932 cri.go:89] found id: "9d99f44eeab838fb07f421ae4950e661e5f56229d69d698d8f9d2033522367ec"
	I0312 00:25:04.184164 1188932 cri.go:89] found id: ""
	I0312 00:25:04.184171 1188932 logs.go:276] 2 containers: [323bfd261df2aa6bdcaadc95d9b56bc2270a431303c0f1b4ecf16c54f87d6c7b 9d99f44eeab838fb07f421ae4950e661e5f56229d69d698d8f9d2033522367ec]
	I0312 00:25:04.184279 1188932 ssh_runner.go:195] Run: which crictl
	I0312 00:25:04.187966 1188932 ssh_runner.go:195] Run: which crictl
	I0312 00:25:04.191734 1188932 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0312 00:25:04.191848 1188932 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0312 00:25:04.237275 1188932 cri.go:89] found id: "37d755023a893e06a00cdc14d2dc52b4f5c42a585c7ee881d7d3e29d292b126d"
	I0312 00:25:04.237305 1188932 cri.go:89] found id: ""
	I0312 00:25:04.237313 1188932 logs.go:276] 1 containers: [37d755023a893e06a00cdc14d2dc52b4f5c42a585c7ee881d7d3e29d292b126d]
	I0312 00:25:04.237388 1188932 ssh_runner.go:195] Run: which crictl
	I0312 00:25:04.240976 1188932 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0312 00:25:04.241058 1188932 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0312 00:25:04.288105 1188932 cri.go:89] found id: "9368f694d66d01b464cc56ea1fd680f25acfb86767c32c49af31dc0da67bacb2"
	I0312 00:25:04.288179 1188932 cri.go:89] found id: "dec1c27378bb4ef4460888bcfb40ca5dfccc459968492dd3e38b75ac421b0e8e"
	I0312 00:25:04.288190 1188932 cri.go:89] found id: ""
	I0312 00:25:04.288199 1188932 logs.go:276] 2 containers: [9368f694d66d01b464cc56ea1fd680f25acfb86767c32c49af31dc0da67bacb2 dec1c27378bb4ef4460888bcfb40ca5dfccc459968492dd3e38b75ac421b0e8e]
	I0312 00:25:04.288261 1188932 ssh_runner.go:195] Run: which crictl
	I0312 00:25:04.292487 1188932 ssh_runner.go:195] Run: which crictl
	I0312 00:25:04.299588 1188932 logs.go:123] Gathering logs for kube-apiserver [4e2109d4bafae2dd679c7ef05f522399401b1a72fb57a580332be0348557938e] ...
	I0312 00:25:04.299625 1188932 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4e2109d4bafae2dd679c7ef05f522399401b1a72fb57a580332be0348557938e"
	I0312 00:25:04.369166 1188932 logs.go:123] Gathering logs for etcd [4c03779c654a410263bf652c7f2d430f984d91760e30f9510e4952a4de615984] ...
	I0312 00:25:04.369201 1188932 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c03779c654a410263bf652c7f2d430f984d91760e30f9510e4952a4de615984"
	I0312 00:25:04.421140 1188932 logs.go:123] Gathering logs for coredns [84b4bce6633a2747538b2b0094a5cdbd7dbdcc49272243631213feef4909f91d] ...
	I0312 00:25:04.421176 1188932 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84b4bce6633a2747538b2b0094a5cdbd7dbdcc49272243631213feef4909f91d"
	I0312 00:25:04.461413 1188932 logs.go:123] Gathering logs for kube-scheduler [88c55a6c60bb439bcdf99b25641fcd058c1a45d0e83950575189d082ee25643b] ...
	I0312 00:25:04.461449 1188932 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 88c55a6c60bb439bcdf99b25641fcd058c1a45d0e83950575189d082ee25643b"
	I0312 00:25:04.505307 1188932 logs.go:123] Gathering logs for kube-scheduler [c8529a2810c2bb6eb70f0f41e1933487322a4e0712a39e177213f3244173c3df] ...
	I0312 00:25:04.505339 1188932 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c8529a2810c2bb6eb70f0f41e1933487322a4e0712a39e177213f3244173c3df"
	I0312 00:25:04.556019 1188932 logs.go:123] Gathering logs for kube-proxy [696c39e44fc846bdc283e21d2e90ef3b55024f28122476a01c185722052ac947] ...
	I0312 00:25:04.556054 1188932 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 696c39e44fc846bdc283e21d2e90ef3b55024f28122476a01c185722052ac947"
	I0312 00:25:04.600108 1188932 logs.go:123] Gathering logs for describe nodes ...
	I0312 00:25:04.600139 1188932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0312 00:25:04.742641 1188932 logs.go:123] Gathering logs for kube-apiserver [5bba79d28e804260c46d7e8af4c7322a6e22b8739a7b65e4f7be5d64306a2ec8] ...
	I0312 00:25:04.742679 1188932 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5bba79d28e804260c46d7e8af4c7322a6e22b8739a7b65e4f7be5d64306a2ec8"
	I0312 00:25:04.798684 1188932 logs.go:123] Gathering logs for storage-provisioner [9368f694d66d01b464cc56ea1fd680f25acfb86767c32c49af31dc0da67bacb2] ...
	I0312 00:25:04.798717 1188932 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9368f694d66d01b464cc56ea1fd680f25acfb86767c32c49af31dc0da67bacb2"
	I0312 00:25:04.847779 1188932 logs.go:123] Gathering logs for container status ...
	I0312 00:25:04.847810 1188932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0312 00:25:04.931917 1188932 logs.go:123] Gathering logs for kube-proxy [98ff369cade299c3b79c03bdcf3fa19fb46b634e98833fe1a4dacb2e30baa2c8] ...
	I0312 00:25:04.931949 1188932 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 98ff369cade299c3b79c03bdcf3fa19fb46b634e98833fe1a4dacb2e30baa2c8"
	I0312 00:25:04.977858 1188932 logs.go:123] Gathering logs for kindnet [323bfd261df2aa6bdcaadc95d9b56bc2270a431303c0f1b4ecf16c54f87d6c7b] ...
	I0312 00:25:04.977887 1188932 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 323bfd261df2aa6bdcaadc95d9b56bc2270a431303c0f1b4ecf16c54f87d6c7b"
	I0312 00:25:05.027625 1188932 logs.go:123] Gathering logs for etcd [1433ae512c5cae729926e9a205666b562fb23efcf8bdc4b06b97fb200275abad] ...
	I0312 00:25:05.027658 1188932 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1433ae512c5cae729926e9a205666b562fb23efcf8bdc4b06b97fb200275abad"
	I0312 00:25:05.088263 1188932 logs.go:123] Gathering logs for kube-controller-manager [819dcb8ce3bf495d395f8bfd175489a574b14f4d7ec3abfc8d1da331f9f75175] ...
	I0312 00:25:05.088303 1188932 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 819dcb8ce3bf495d395f8bfd175489a574b14f4d7ec3abfc8d1da331f9f75175"
	I0312 00:25:05.160391 1188932 logs.go:123] Gathering logs for storage-provisioner [dec1c27378bb4ef4460888bcfb40ca5dfccc459968492dd3e38b75ac421b0e8e] ...
	I0312 00:25:05.160427 1188932 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dec1c27378bb4ef4460888bcfb40ca5dfccc459968492dd3e38b75ac421b0e8e"
	I0312 00:25:05.218952 1188932 logs.go:123] Gathering logs for kubelet ...
	I0312 00:25:05.218983 1188932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0312 00:25:05.274186 1188932 logs.go:138] Found kubelet problem: Mar 12 00:21:16 no-preload-820117 kubelet[656]: W0312 00:21:16.198194     656 reflector.go:539] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-820117" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-820117' and this object
	W0312 00:25:05.274423 1188932 logs.go:138] Found kubelet problem: Mar 12 00:21:16 no-preload-820117 kubelet[656]: E0312 00:21:16.198236     656 reflector.go:147] object-"kubernetes-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-820117" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-820117' and this object
	I0312 00:25:05.301923 1188932 logs.go:123] Gathering logs for kubernetes-dashboard [37d755023a893e06a00cdc14d2dc52b4f5c42a585c7ee881d7d3e29d292b126d] ...
	I0312 00:25:05.301954 1188932 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 37d755023a893e06a00cdc14d2dc52b4f5c42a585c7ee881d7d3e29d292b126d"
	I0312 00:25:05.342290 1188932 logs.go:123] Gathering logs for kube-controller-manager [e37d0739e1cf5a034f8279d0f1c5a380bdf13f8df12805914af49b8d09457cc3] ...
	I0312 00:25:05.342318 1188932 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e37d0739e1cf5a034f8279d0f1c5a380bdf13f8df12805914af49b8d09457cc3"
	I0312 00:25:05.432926 1188932 logs.go:123] Gathering logs for kindnet [9d99f44eeab838fb07f421ae4950e661e5f56229d69d698d8f9d2033522367ec] ...
	I0312 00:25:05.432961 1188932 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d99f44eeab838fb07f421ae4950e661e5f56229d69d698d8f9d2033522367ec"
	I0312 00:25:05.472908 1188932 logs.go:123] Gathering logs for containerd ...
	I0312 00:25:05.472935 1188932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0312 00:25:05.535001 1188932 logs.go:123] Gathering logs for dmesg ...
	I0312 00:25:05.535041 1188932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0312 00:25:04.251144 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:25:06.749259 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:25:05.556359 1188932 logs.go:123] Gathering logs for coredns [c2acdec1d054a8ac4872c3f65232c70b6c961ec0aba27f0912710c095efd38b6] ...
	I0312 00:25:05.556399 1188932 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c2acdec1d054a8ac4872c3f65232c70b6c961ec0aba27f0912710c095efd38b6"
	I0312 00:25:05.601787 1188932 out.go:304] Setting ErrFile to fd 2...
	I0312 00:25:05.601814 1188932 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0312 00:25:05.601865 1188932 out.go:239] X Problems detected in kubelet:
	W0312 00:25:05.601879 1188932 out.go:239]   Mar 12 00:21:16 no-preload-820117 kubelet[656]: W0312 00:21:16.198194     656 reflector.go:539] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-820117" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-820117' and this object
	W0312 00:25:05.601887 1188932 out.go:239]   Mar 12 00:21:16 no-preload-820117 kubelet[656]: E0312 00:21:16.198236     656 reflector.go:147] object-"kubernetes-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-820117" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-820117' and this object
	I0312 00:25:05.601901 1188932 out.go:304] Setting ErrFile to fd 2...
	I0312 00:25:05.601907 1188932 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0312 00:25:08.749400 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:25:10.750185 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:25:13.249560 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:25:15.249809 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:25:17.250310 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:25:15.603197 1188932 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0312 00:25:15.616802 1188932 api_server.go:72] duration metric: took 4m17.8800599s to wait for apiserver process to appear ...
	I0312 00:25:15.616828 1188932 api_server.go:88] waiting for apiserver healthz status ...
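
With the apiserver process confirmed via pgrep, the run now waits on the apiserver's /healthz endpoint. A minimal standalone sketch of such a poll; the host:port below is a placeholder (minikube derives the real endpoint from the cluster config), and the 4-minute deadline is likewise an assumed value:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 2 * time.Second,
    		// the apiserver serves a self-signed cert; skip verification
    		// for this probe only.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(4 * time.Minute)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get("https://192.168.85.2:8443/healthz") // placeholder endpoint
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				fmt.Println("healthz:", string(body))
    				return
    			}
    		}
    		time.Sleep(time.Second)
    	}
    	fmt.Println("apiserver never became healthy")
    }
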
	I0312 00:25:15.616863 1188932 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0312 00:25:15.616940 1188932 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0312 00:25:15.656555 1188932 cri.go:89] found id: "5bba79d28e804260c46d7e8af4c7322a6e22b8739a7b65e4f7be5d64306a2ec8"
	I0312 00:25:15.656580 1188932 cri.go:89] found id: "4e2109d4bafae2dd679c7ef05f522399401b1a72fb57a580332be0348557938e"
	I0312 00:25:15.656585 1188932 cri.go:89] found id: ""
	I0312 00:25:15.656592 1188932 logs.go:276] 2 containers: [5bba79d28e804260c46d7e8af4c7322a6e22b8739a7b65e4f7be5d64306a2ec8 4e2109d4bafae2dd679c7ef05f522399401b1a72fb57a580332be0348557938e]
	I0312 00:25:15.656656 1188932 ssh_runner.go:195] Run: which crictl
	I0312 00:25:15.660631 1188932 ssh_runner.go:195] Run: which crictl
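
Each "listing CRI containers" stanza follows the same pattern: crictl ps -a --quiet --name=<component> prints matching container IDs one per line, and logs.go:276 counts them (two here: the current and the previous incarnation of kube-apiserver). A minimal sketch of that listing step, assuming crictl is on PATH (the which crictl calls above are minikube resolving exactly that):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // listContainers returns the IDs of all containers (running or exited)
    // whose name matches, one ID per stdout line as crictl emits them.
    func listContainers(name string) ([]string, error) {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    	if err != nil {
    		return nil, err
    	}
    	var ids []string
    	for _, l := range strings.Split(string(out), "\n") {
    		if l = strings.TrimSpace(l); l != "" {
    			ids = append(ids, l)
    		}
    	}
    	return ids, nil
    }

    func main() {
    	ids, err := listContainers("kube-apiserver")
    	if err != nil {
    		fmt.Println("crictl failed:", err)
    		return
    	}
    	fmt.Printf("%d containers: %v\n", len(ids), ids)
    }
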
	I0312 00:25:15.664102 1188932 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0312 00:25:15.664173 1188932 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0312 00:25:15.706381 1188932 cri.go:89] found id: "4c03779c654a410263bf652c7f2d430f984d91760e30f9510e4952a4de615984"
	I0312 00:25:15.706404 1188932 cri.go:89] found id: "1433ae512c5cae729926e9a205666b562fb23efcf8bdc4b06b97fb200275abad"
	I0312 00:25:15.706409 1188932 cri.go:89] found id: ""
	I0312 00:25:15.706417 1188932 logs.go:276] 2 containers: [4c03779c654a410263bf652c7f2d430f984d91760e30f9510e4952a4de615984 1433ae512c5cae729926e9a205666b562fb23efcf8bdc4b06b97fb200275abad]
	I0312 00:25:15.706473 1188932 ssh_runner.go:195] Run: which crictl
	I0312 00:25:15.710574 1188932 ssh_runner.go:195] Run: which crictl
	I0312 00:25:15.714335 1188932 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0312 00:25:15.714407 1188932 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0312 00:25:15.760796 1188932 cri.go:89] found id: "c2acdec1d054a8ac4872c3f65232c70b6c961ec0aba27f0912710c095efd38b6"
	I0312 00:25:15.760821 1188932 cri.go:89] found id: "84b4bce6633a2747538b2b0094a5cdbd7dbdcc49272243631213feef4909f91d"
	I0312 00:25:15.760833 1188932 cri.go:89] found id: ""
	I0312 00:25:15.760842 1188932 logs.go:276] 2 containers: [c2acdec1d054a8ac4872c3f65232c70b6c961ec0aba27f0912710c095efd38b6 84b4bce6633a2747538b2b0094a5cdbd7dbdcc49272243631213feef4909f91d]
	I0312 00:25:15.760900 1188932 ssh_runner.go:195] Run: which crictl
	I0312 00:25:15.764634 1188932 ssh_runner.go:195] Run: which crictl
	I0312 00:25:15.768257 1188932 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0312 00:25:15.768338 1188932 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0312 00:25:15.815803 1188932 cri.go:89] found id: "88c55a6c60bb439bcdf99b25641fcd058c1a45d0e83950575189d082ee25643b"
	I0312 00:25:15.815826 1188932 cri.go:89] found id: "c8529a2810c2bb6eb70f0f41e1933487322a4e0712a39e177213f3244173c3df"
	I0312 00:25:15.815832 1188932 cri.go:89] found id: ""
	I0312 00:25:15.815839 1188932 logs.go:276] 2 containers: [88c55a6c60bb439bcdf99b25641fcd058c1a45d0e83950575189d082ee25643b c8529a2810c2bb6eb70f0f41e1933487322a4e0712a39e177213f3244173c3df]
	I0312 00:25:15.815896 1188932 ssh_runner.go:195] Run: which crictl
	I0312 00:25:15.821347 1188932 ssh_runner.go:195] Run: which crictl
	I0312 00:25:15.825458 1188932 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0312 00:25:15.825538 1188932 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0312 00:25:15.874737 1188932 cri.go:89] found id: "696c39e44fc846bdc283e21d2e90ef3b55024f28122476a01c185722052ac947"
	I0312 00:25:15.874761 1188932 cri.go:89] found id: "98ff369cade299c3b79c03bdcf3fa19fb46b634e98833fe1a4dacb2e30baa2c8"
	I0312 00:25:15.874765 1188932 cri.go:89] found id: ""
	I0312 00:25:15.874773 1188932 logs.go:276] 2 containers: [696c39e44fc846bdc283e21d2e90ef3b55024f28122476a01c185722052ac947 98ff369cade299c3b79c03bdcf3fa19fb46b634e98833fe1a4dacb2e30baa2c8]
	I0312 00:25:15.874828 1188932 ssh_runner.go:195] Run: which crictl
	I0312 00:25:15.878670 1188932 ssh_runner.go:195] Run: which crictl
	I0312 00:25:15.882276 1188932 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0312 00:25:15.882355 1188932 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0312 00:25:15.926561 1188932 cri.go:89] found id: "819dcb8ce3bf495d395f8bfd175489a574b14f4d7ec3abfc8d1da331f9f75175"
	I0312 00:25:15.926581 1188932 cri.go:89] found id: "e37d0739e1cf5a034f8279d0f1c5a380bdf13f8df12805914af49b8d09457cc3"
	I0312 00:25:15.926585 1188932 cri.go:89] found id: ""
	I0312 00:25:15.926592 1188932 logs.go:276] 2 containers: [819dcb8ce3bf495d395f8bfd175489a574b14f4d7ec3abfc8d1da331f9f75175 e37d0739e1cf5a034f8279d0f1c5a380bdf13f8df12805914af49b8d09457cc3]
	I0312 00:25:15.926649 1188932 ssh_runner.go:195] Run: which crictl
	I0312 00:25:15.930718 1188932 ssh_runner.go:195] Run: which crictl
	I0312 00:25:15.934373 1188932 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0312 00:25:15.934484 1188932 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0312 00:25:15.979063 1188932 cri.go:89] found id: "323bfd261df2aa6bdcaadc95d9b56bc2270a431303c0f1b4ecf16c54f87d6c7b"
	I0312 00:25:15.979086 1188932 cri.go:89] found id: "9d99f44eeab838fb07f421ae4950e661e5f56229d69d698d8f9d2033522367ec"
	I0312 00:25:15.979091 1188932 cri.go:89] found id: ""
	I0312 00:25:15.979098 1188932 logs.go:276] 2 containers: [323bfd261df2aa6bdcaadc95d9b56bc2270a431303c0f1b4ecf16c54f87d6c7b 9d99f44eeab838fb07f421ae4950e661e5f56229d69d698d8f9d2033522367ec]
	I0312 00:25:15.979156 1188932 ssh_runner.go:195] Run: which crictl
	I0312 00:25:15.982802 1188932 ssh_runner.go:195] Run: which crictl
	I0312 00:25:15.986260 1188932 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0312 00:25:15.986344 1188932 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0312 00:25:16.034671 1188932 cri.go:89] found id: "37d755023a893e06a00cdc14d2dc52b4f5c42a585c7ee881d7d3e29d292b126d"
	I0312 00:25:16.034695 1188932 cri.go:89] found id: ""
	I0312 00:25:16.034704 1188932 logs.go:276] 1 containers: [37d755023a893e06a00cdc14d2dc52b4f5c42a585c7ee881d7d3e29d292b126d]
	I0312 00:25:16.034765 1188932 ssh_runner.go:195] Run: which crictl
	I0312 00:25:16.040676 1188932 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0312 00:25:16.040776 1188932 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0312 00:25:16.089561 1188932 cri.go:89] found id: "9368f694d66d01b464cc56ea1fd680f25acfb86767c32c49af31dc0da67bacb2"
	I0312 00:25:16.089583 1188932 cri.go:89] found id: "dec1c27378bb4ef4460888bcfb40ca5dfccc459968492dd3e38b75ac421b0e8e"
	I0312 00:25:16.089587 1188932 cri.go:89] found id: ""
	I0312 00:25:16.089595 1188932 logs.go:276] 2 containers: [9368f694d66d01b464cc56ea1fd680f25acfb86767c32c49af31dc0da67bacb2 dec1c27378bb4ef4460888bcfb40ca5dfccc459968492dd3e38b75ac421b0e8e]
	I0312 00:25:16.089660 1188932 ssh_runner.go:195] Run: which crictl
	I0312 00:25:16.093570 1188932 ssh_runner.go:195] Run: which crictl
	I0312 00:25:16.097242 1188932 logs.go:123] Gathering logs for kube-proxy [696c39e44fc846bdc283e21d2e90ef3b55024f28122476a01c185722052ac947] ...
	I0312 00:25:16.097269 1188932 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 696c39e44fc846bdc283e21d2e90ef3b55024f28122476a01c185722052ac947"
	I0312 00:25:16.137056 1188932 logs.go:123] Gathering logs for kube-controller-manager [819dcb8ce3bf495d395f8bfd175489a574b14f4d7ec3abfc8d1da331f9f75175] ...
	I0312 00:25:16.137135 1188932 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 819dcb8ce3bf495d395f8bfd175489a574b14f4d7ec3abfc8d1da331f9f75175"
	I0312 00:25:16.213494 1188932 logs.go:123] Gathering logs for container status ...
	I0312 00:25:16.213574 1188932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
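
Unlike the per-component steps, "container status" uses a fallback chain rather than a fixed binary: prefer whatever crictl resolves to, and fall back to docker ps -a if crictl is missing or errors. A minimal sketch running the same chain through bash, exactly as the log line shows:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// backticks and || are bash syntax, so the whole chain goes to bash -c
    	out, err := exec.Command("/bin/bash", "-c",
    		"sudo `which crictl || echo crictl` ps -a || sudo docker ps -a").CombinedOutput()
    	if err != nil {
    		fmt.Println("both runtimes failed:", err)
    	}
    	fmt.Print(string(out))
    }
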
	I0312 00:25:16.292189 1188932 logs.go:123] Gathering logs for kube-apiserver [5bba79d28e804260c46d7e8af4c7322a6e22b8739a7b65e4f7be5d64306a2ec8] ...
	I0312 00:25:16.292373 1188932 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5bba79d28e804260c46d7e8af4c7322a6e22b8739a7b65e4f7be5d64306a2ec8"
	I0312 00:25:16.366443 1188932 logs.go:123] Gathering logs for etcd [1433ae512c5cae729926e9a205666b562fb23efcf8bdc4b06b97fb200275abad] ...
	I0312 00:25:16.366523 1188932 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1433ae512c5cae729926e9a205666b562fb23efcf8bdc4b06b97fb200275abad"
	I0312 00:25:16.424676 1188932 logs.go:123] Gathering logs for kube-scheduler [88c55a6c60bb439bcdf99b25641fcd058c1a45d0e83950575189d082ee25643b] ...
	I0312 00:25:16.424709 1188932 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 88c55a6c60bb439bcdf99b25641fcd058c1a45d0e83950575189d082ee25643b"
	I0312 00:25:16.471886 1188932 logs.go:123] Gathering logs for coredns [c2acdec1d054a8ac4872c3f65232c70b6c961ec0aba27f0912710c095efd38b6] ...
	I0312 00:25:16.471918 1188932 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c2acdec1d054a8ac4872c3f65232c70b6c961ec0aba27f0912710c095efd38b6"
	I0312 00:25:16.516126 1188932 logs.go:123] Gathering logs for kube-scheduler [c8529a2810c2bb6eb70f0f41e1933487322a4e0712a39e177213f3244173c3df] ...
	I0312 00:25:16.516159 1188932 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c8529a2810c2bb6eb70f0f41e1933487322a4e0712a39e177213f3244173c3df"
	I0312 00:25:16.565627 1188932 logs.go:123] Gathering logs for kube-controller-manager [e37d0739e1cf5a034f8279d0f1c5a380bdf13f8df12805914af49b8d09457cc3] ...
	I0312 00:25:16.565660 1188932 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e37d0739e1cf5a034f8279d0f1c5a380bdf13f8df12805914af49b8d09457cc3"
	I0312 00:25:16.642532 1188932 logs.go:123] Gathering logs for storage-provisioner [9368f694d66d01b464cc56ea1fd680f25acfb86767c32c49af31dc0da67bacb2] ...
	I0312 00:25:16.642566 1188932 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9368f694d66d01b464cc56ea1fd680f25acfb86767c32c49af31dc0da67bacb2"
	I0312 00:25:16.690549 1188932 logs.go:123] Gathering logs for dmesg ...
	I0312 00:25:16.690582 1188932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0312 00:25:16.719869 1188932 logs.go:123] Gathering logs for describe nodes ...
	I0312 00:25:16.719903 1188932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
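
"describe nodes" is gathered with the kubectl binary minikube ships inside the node, pointed at the node-local kubeconfig, so it works even when no kubectl is installed on the host. A minimal sketch using the exact paths from the log line above (note the old-k8s-version run later uses the same layout with a v1.20.0 binary):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	out, err := exec.Command("sudo",
    		"/var/lib/minikube/binaries/v1.29.0-rc.2/kubectl",
    		"describe", "nodes",
    		"--kubeconfig=/var/lib/minikube/kubeconfig").CombinedOutput()
    	if err != nil {
    		fmt.Println("describe nodes failed:", err)
    	}
    	fmt.Print(string(out))
    }
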
	I0312 00:25:16.883756 1188932 logs.go:123] Gathering logs for etcd [4c03779c654a410263bf652c7f2d430f984d91760e30f9510e4952a4de615984] ...
	I0312 00:25:16.883795 1188932 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c03779c654a410263bf652c7f2d430f984d91760e30f9510e4952a4de615984"
	I0312 00:25:16.940774 1188932 logs.go:123] Gathering logs for storage-provisioner [dec1c27378bb4ef4460888bcfb40ca5dfccc459968492dd3e38b75ac421b0e8e] ...
	I0312 00:25:16.940806 1188932 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dec1c27378bb4ef4460888bcfb40ca5dfccc459968492dd3e38b75ac421b0e8e"
	I0312 00:25:16.982622 1188932 logs.go:123] Gathering logs for kube-apiserver [4e2109d4bafae2dd679c7ef05f522399401b1a72fb57a580332be0348557938e] ...
	I0312 00:25:16.982652 1188932 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4e2109d4bafae2dd679c7ef05f522399401b1a72fb57a580332be0348557938e"
	I0312 00:25:17.065545 1188932 logs.go:123] Gathering logs for kube-proxy [98ff369cade299c3b79c03bdcf3fa19fb46b634e98833fe1a4dacb2e30baa2c8] ...
	I0312 00:25:17.065578 1188932 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 98ff369cade299c3b79c03bdcf3fa19fb46b634e98833fe1a4dacb2e30baa2c8"
	I0312 00:25:17.115498 1188932 logs.go:123] Gathering logs for kindnet [323bfd261df2aa6bdcaadc95d9b56bc2270a431303c0f1b4ecf16c54f87d6c7b] ...
	I0312 00:25:17.115530 1188932 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 323bfd261df2aa6bdcaadc95d9b56bc2270a431303c0f1b4ecf16c54f87d6c7b"
	I0312 00:25:17.157056 1188932 logs.go:123] Gathering logs for kubernetes-dashboard [37d755023a893e06a00cdc14d2dc52b4f5c42a585c7ee881d7d3e29d292b126d] ...
	I0312 00:25:17.157094 1188932 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 37d755023a893e06a00cdc14d2dc52b4f5c42a585c7ee881d7d3e29d292b126d"
	I0312 00:25:17.201427 1188932 logs.go:123] Gathering logs for containerd ...
	I0312 00:25:17.201455 1188932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0312 00:25:17.262909 1188932 logs.go:123] Gathering logs for kubelet ...
	I0312 00:25:17.262944 1188932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0312 00:25:17.310234 1188932 logs.go:138] Found kubelet problem: Mar 12 00:21:16 no-preload-820117 kubelet[656]: W0312 00:21:16.198194     656 reflector.go:539] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-820117" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-820117' and this object
	W0312 00:25:17.310492 1188932 logs.go:138] Found kubelet problem: Mar 12 00:21:16 no-preload-820117 kubelet[656]: E0312 00:21:16.198236     656 reflector.go:147] object-"kubernetes-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-820117" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-820117' and this object
	I0312 00:25:17.338504 1188932 logs.go:123] Gathering logs for coredns [84b4bce6633a2747538b2b0094a5cdbd7dbdcc49272243631213feef4909f91d] ...
	I0312 00:25:17.338537 1188932 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84b4bce6633a2747538b2b0094a5cdbd7dbdcc49272243631213feef4909f91d"
	I0312 00:25:17.381814 1188932 logs.go:123] Gathering logs for kindnet [9d99f44eeab838fb07f421ae4950e661e5f56229d69d698d8f9d2033522367ec] ...
	I0312 00:25:17.381895 1188932 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d99f44eeab838fb07f421ae4950e661e5f56229d69d698d8f9d2033522367ec"
	I0312 00:25:17.422599 1188932 out.go:304] Setting ErrFile to fd 2...
	I0312 00:25:17.422628 1188932 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0312 00:25:17.422702 1188932 out.go:239] X Problems detected in kubelet:
	W0312 00:25:17.422718 1188932 out.go:239]   Mar 12 00:21:16 no-preload-820117 kubelet[656]: W0312 00:21:16.198194     656 reflector.go:539] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-820117" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-820117' and this object
	W0312 00:25:17.422728 1188932 out.go:239]   Mar 12 00:21:16 no-preload-820117 kubelet[656]: E0312 00:21:16.198236     656 reflector.go:147] object-"kubernetes-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-820117" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-820117' and this object
	I0312 00:25:17.422919 1188932 out.go:304] Setting ErrFile to fd 2...
	I0312 00:25:17.422927 1188932 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0312 00:25:19.749835 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:25:21.749927 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:25:23.750869 1183642 pod_ready.go:102] pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace has status "Ready":"False"
	I0312 00:25:25.249870 1183642 pod_ready.go:81] duration metric: took 4m0.006519105s for pod "metrics-server-9975d5f86-c87xf" in "kube-system" namespace to be "Ready" ...
	E0312 00:25:25.249898 1183642 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0312 00:25:25.249907 1183642 pod_ready.go:38] duration metric: took 5m28.619543264s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
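
The three lines above are the old-k8s-version run giving up on metrics-server-9975d5f86-c87xf after its 4-minute per-pod budget: the pod never reached Ready, so WaitExtra returns context deadline exceeded. minikube's pod_ready.go polls the pod's Ready condition via client-go; an equivalent standalone check (a sketch, not minikube's code) can be expressed with kubectl wait:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	cmd := exec.Command("kubectl", "wait", "--for=condition=Ready",
    		"pod/metrics-server-9975d5f86-c87xf", "-n", "kube-system", "--timeout=4m0s")
    	out, err := cmd.CombinedOutput()
    	fmt.Print(string(out))
    	if err != nil {
    		// mirrors the "context deadline exceeded" outcome in the log
    		fmt.Println("wait failed:", err)
    	}
    }

The kubelet problems quoted later in this log (ErrImagePull from the deliberately unresolvable fake.domain registry) explain why the condition never turns true.
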
	I0312 00:25:25.256852 1183642 api_server.go:52] waiting for apiserver process to appear ...
	I0312 00:25:25.256936 1183642 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0312 00:25:25.257019 1183642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0312 00:25:25.300453 1183642 cri.go:89] found id: "e90658574cccc9b56ea1fd38865b78eb14b34d54f7b6d6f655f8b82d026ee372"
	I0312 00:25:25.300515 1183642 cri.go:89] found id: "022154a50546e744b25648ac078a4535c3a97e91f97547e8008e89235fd126f5"
	I0312 00:25:25.300533 1183642 cri.go:89] found id: ""
	I0312 00:25:25.300551 1183642 logs.go:276] 2 containers: [e90658574cccc9b56ea1fd38865b78eb14b34d54f7b6d6f655f8b82d026ee372 022154a50546e744b25648ac078a4535c3a97e91f97547e8008e89235fd126f5]
	I0312 00:25:25.300638 1183642 ssh_runner.go:195] Run: which crictl
	I0312 00:25:25.304205 1183642 ssh_runner.go:195] Run: which crictl
	I0312 00:25:25.307725 1183642 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0312 00:25:25.307813 1183642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0312 00:25:25.344547 1183642 cri.go:89] found id: "227b5f4c3ec0b541f1e734b3a9400260044363214781e1a18f0928e954c98086"
	I0312 00:25:25.344570 1183642 cri.go:89] found id: "00fea42a626bc543839c933d2b36dd4155e2329531f2c8a74fa65079753377a9"
	I0312 00:25:25.344576 1183642 cri.go:89] found id: ""
	I0312 00:25:25.344583 1183642 logs.go:276] 2 containers: [227b5f4c3ec0b541f1e734b3a9400260044363214781e1a18f0928e954c98086 00fea42a626bc543839c933d2b36dd4155e2329531f2c8a74fa65079753377a9]
	I0312 00:25:25.344658 1183642 ssh_runner.go:195] Run: which crictl
	I0312 00:25:25.348234 1183642 ssh_runner.go:195] Run: which crictl
	I0312 00:25:25.351686 1183642 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0312 00:25:25.351805 1183642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0312 00:25:25.393110 1183642 cri.go:89] found id: "0d3039260ff7a1d3154eac6a37f5460535e860a77d3473b023822022e245e097"
	I0312 00:25:25.393134 1183642 cri.go:89] found id: "91ba0f1087505b74193c749143407b360ca52adeb6c8e6fed4c64111ff7ac963"
	I0312 00:25:25.393139 1183642 cri.go:89] found id: ""
	I0312 00:25:25.393146 1183642 logs.go:276] 2 containers: [0d3039260ff7a1d3154eac6a37f5460535e860a77d3473b023822022e245e097 91ba0f1087505b74193c749143407b360ca52adeb6c8e6fed4c64111ff7ac963]
	I0312 00:25:25.393203 1183642 ssh_runner.go:195] Run: which crictl
	I0312 00:25:25.396999 1183642 ssh_runner.go:195] Run: which crictl
	I0312 00:25:25.400568 1183642 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0312 00:25:25.400686 1183642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0312 00:25:25.439058 1183642 cri.go:89] found id: "cda4f1be508f3de4744e406ac4acfcb87143068155c189f0e7506f78db3a42c9"
	I0312 00:25:25.439084 1183642 cri.go:89] found id: "c3f64500c09efd7fdf78260f3bef5ed1adaefa3e3a847a7540726cbee6bd042f"
	I0312 00:25:25.439089 1183642 cri.go:89] found id: ""
	I0312 00:25:25.439096 1183642 logs.go:276] 2 containers: [cda4f1be508f3de4744e406ac4acfcb87143068155c189f0e7506f78db3a42c9 c3f64500c09efd7fdf78260f3bef5ed1adaefa3e3a847a7540726cbee6bd042f]
	I0312 00:25:25.439197 1183642 ssh_runner.go:195] Run: which crictl
	I0312 00:25:25.443086 1183642 ssh_runner.go:195] Run: which crictl
	I0312 00:25:25.446900 1183642 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0312 00:25:25.446974 1183642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0312 00:25:25.488093 1183642 cri.go:89] found id: "46d69c632200b08b2b8f94cd051969df887d9d074acace5d606fef37cc84295e"
	I0312 00:25:25.488164 1183642 cri.go:89] found id: "8abc9a2fec8f5340e92089f46c5ff2bf798571fbcb6c7ce9545d0e353715bed4"
	I0312 00:25:25.488181 1183642 cri.go:89] found id: ""
	I0312 00:25:25.488196 1183642 logs.go:276] 2 containers: [46d69c632200b08b2b8f94cd051969df887d9d074acace5d606fef37cc84295e 8abc9a2fec8f5340e92089f46c5ff2bf798571fbcb6c7ce9545d0e353715bed4]
	I0312 00:25:25.488286 1183642 ssh_runner.go:195] Run: which crictl
	I0312 00:25:25.492007 1183642 ssh_runner.go:195] Run: which crictl
	I0312 00:25:25.495477 1183642 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0312 00:25:25.495557 1183642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0312 00:25:25.536694 1183642 cri.go:89] found id: "ea472223f1505d76ec6e5c18af4f3ab7760ebdefed097a213c78a396e15d7ba7"
	I0312 00:25:25.536759 1183642 cri.go:89] found id: "ac83611721f7a3d26415ed5ae3625edece62f8bd00bc9a63ce61ffa2ad2c9fbc"
	I0312 00:25:25.536771 1183642 cri.go:89] found id: ""
	I0312 00:25:25.536779 1183642 logs.go:276] 2 containers: [ea472223f1505d76ec6e5c18af4f3ab7760ebdefed097a213c78a396e15d7ba7 ac83611721f7a3d26415ed5ae3625edece62f8bd00bc9a63ce61ffa2ad2c9fbc]
	I0312 00:25:25.536850 1183642 ssh_runner.go:195] Run: which crictl
	I0312 00:25:25.540733 1183642 ssh_runner.go:195] Run: which crictl
	I0312 00:25:25.544497 1183642 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0312 00:25:25.544627 1183642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0312 00:25:25.581655 1183642 cri.go:89] found id: "ae84159c4657eff5eabe4d3d9526af6b40457144653c8a1c7e1b3bc077bdcad0"
	I0312 00:25:25.581679 1183642 cri.go:89] found id: "1a5414516a6e0b5fd6faf9a04f4428d130258262f60692b0dffc8b7ffc8541a6"
	I0312 00:25:25.581684 1183642 cri.go:89] found id: ""
	I0312 00:25:25.581705 1183642 logs.go:276] 2 containers: [ae84159c4657eff5eabe4d3d9526af6b40457144653c8a1c7e1b3bc077bdcad0 1a5414516a6e0b5fd6faf9a04f4428d130258262f60692b0dffc8b7ffc8541a6]
	I0312 00:25:25.581770 1183642 ssh_runner.go:195] Run: which crictl
	I0312 00:25:25.585569 1183642 ssh_runner.go:195] Run: which crictl
	I0312 00:25:25.589471 1183642 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0312 00:25:25.589550 1183642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0312 00:25:25.630552 1183642 cri.go:89] found id: "98a1386a1a083a30c283c882c4ad3a528364088aba6315aa3bd42ba324436879"
	I0312 00:25:25.630576 1183642 cri.go:89] found id: "0c3514f843ab806b20582bed37c5a7606b322a3eae956ca0d2a4c8b59c7beb86"
	I0312 00:25:25.630581 1183642 cri.go:89] found id: ""
	I0312 00:25:25.630589 1183642 logs.go:276] 2 containers: [98a1386a1a083a30c283c882c4ad3a528364088aba6315aa3bd42ba324436879 0c3514f843ab806b20582bed37c5a7606b322a3eae956ca0d2a4c8b59c7beb86]
	I0312 00:25:25.630648 1183642 ssh_runner.go:195] Run: which crictl
	I0312 00:25:25.634419 1183642 ssh_runner.go:195] Run: which crictl
	I0312 00:25:25.637992 1183642 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0312 00:25:25.638081 1183642 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0312 00:25:25.684057 1183642 cri.go:89] found id: "cb775d92b00e5c14170849b1b42ccfd48f3c9d18c9b5da2f8234588eaf4aa2ec"
	I0312 00:25:25.684120 1183642 cri.go:89] found id: ""
	I0312 00:25:25.684132 1183642 logs.go:276] 1 containers: [cb775d92b00e5c14170849b1b42ccfd48f3c9d18c9b5da2f8234588eaf4aa2ec]
	I0312 00:25:25.684207 1183642 ssh_runner.go:195] Run: which crictl
	I0312 00:25:25.688044 1183642 logs.go:123] Gathering logs for kube-apiserver [e90658574cccc9b56ea1fd38865b78eb14b34d54f7b6d6f655f8b82d026ee372] ...
	I0312 00:25:25.688071 1183642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e90658574cccc9b56ea1fd38865b78eb14b34d54f7b6d6f655f8b82d026ee372"
	I0312 00:25:25.746459 1183642 logs.go:123] Gathering logs for kube-apiserver [022154a50546e744b25648ac078a4535c3a97e91f97547e8008e89235fd126f5] ...
	I0312 00:25:25.746492 1183642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 022154a50546e744b25648ac078a4535c3a97e91f97547e8008e89235fd126f5"
	I0312 00:25:25.807669 1183642 logs.go:123] Gathering logs for coredns [0d3039260ff7a1d3154eac6a37f5460535e860a77d3473b023822022e245e097] ...
	I0312 00:25:25.807705 1183642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0d3039260ff7a1d3154eac6a37f5460535e860a77d3473b023822022e245e097"
	I0312 00:25:25.858063 1183642 logs.go:123] Gathering logs for kube-scheduler [cda4f1be508f3de4744e406ac4acfcb87143068155c189f0e7506f78db3a42c9] ...
	I0312 00:25:25.858091 1183642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cda4f1be508f3de4744e406ac4acfcb87143068155c189f0e7506f78db3a42c9"
	I0312 00:25:25.900479 1183642 logs.go:123] Gathering logs for kube-proxy [8abc9a2fec8f5340e92089f46c5ff2bf798571fbcb6c7ce9545d0e353715bed4] ...
	I0312 00:25:25.900511 1183642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8abc9a2fec8f5340e92089f46c5ff2bf798571fbcb6c7ce9545d0e353715bed4"
	I0312 00:25:25.946460 1183642 logs.go:123] Gathering logs for kindnet [1a5414516a6e0b5fd6faf9a04f4428d130258262f60692b0dffc8b7ffc8541a6] ...
	I0312 00:25:25.946491 1183642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a5414516a6e0b5fd6faf9a04f4428d130258262f60692b0dffc8b7ffc8541a6"
	I0312 00:25:25.992941 1183642 logs.go:123] Gathering logs for kubernetes-dashboard [cb775d92b00e5c14170849b1b42ccfd48f3c9d18c9b5da2f8234588eaf4aa2ec] ...
	I0312 00:25:25.992975 1183642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cb775d92b00e5c14170849b1b42ccfd48f3c9d18c9b5da2f8234588eaf4aa2ec"
	I0312 00:25:26.047245 1183642 logs.go:123] Gathering logs for describe nodes ...
	I0312 00:25:26.047275 1183642 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0312 00:25:26.245179 1183642 logs.go:123] Gathering logs for coredns [91ba0f1087505b74193c749143407b360ca52adeb6c8e6fed4c64111ff7ac963] ...
	I0312 00:25:26.245208 1183642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 91ba0f1087505b74193c749143407b360ca52adeb6c8e6fed4c64111ff7ac963"
	I0312 00:25:26.286685 1183642 logs.go:123] Gathering logs for kube-proxy [46d69c632200b08b2b8f94cd051969df887d9d074acace5d606fef37cc84295e] ...
	I0312 00:25:26.286717 1183642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 46d69c632200b08b2b8f94cd051969df887d9d074acace5d606fef37cc84295e"
	I0312 00:25:26.326586 1183642 logs.go:123] Gathering logs for kindnet [ae84159c4657eff5eabe4d3d9526af6b40457144653c8a1c7e1b3bc077bdcad0] ...
	I0312 00:25:26.326619 1183642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ae84159c4657eff5eabe4d3d9526af6b40457144653c8a1c7e1b3bc077bdcad0"
	I0312 00:25:26.371019 1183642 logs.go:123] Gathering logs for storage-provisioner [0c3514f843ab806b20582bed37c5a7606b322a3eae956ca0d2a4c8b59c7beb86] ...
	I0312 00:25:26.371048 1183642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c3514f843ab806b20582bed37c5a7606b322a3eae956ca0d2a4c8b59c7beb86"
	I0312 00:25:26.437329 1183642 logs.go:123] Gathering logs for etcd [227b5f4c3ec0b541f1e734b3a9400260044363214781e1a18f0928e954c98086] ...
	I0312 00:25:26.437363 1183642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 227b5f4c3ec0b541f1e734b3a9400260044363214781e1a18f0928e954c98086"
	I0312 00:25:26.491828 1183642 logs.go:123] Gathering logs for kube-scheduler [c3f64500c09efd7fdf78260f3bef5ed1adaefa3e3a847a7540726cbee6bd042f] ...
	I0312 00:25:26.491858 1183642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c3f64500c09efd7fdf78260f3bef5ed1adaefa3e3a847a7540726cbee6bd042f"
	I0312 00:25:26.534467 1183642 logs.go:123] Gathering logs for storage-provisioner [98a1386a1a083a30c283c882c4ad3a528364088aba6315aa3bd42ba324436879] ...
	I0312 00:25:26.534502 1183642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 98a1386a1a083a30c283c882c4ad3a528364088aba6315aa3bd42ba324436879"
	I0312 00:25:26.573116 1183642 logs.go:123] Gathering logs for containerd ...
	I0312 00:25:26.573190 1183642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0312 00:25:26.637555 1183642 logs.go:123] Gathering logs for container status ...
	I0312 00:25:26.637593 1183642 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0312 00:25:26.690245 1183642 logs.go:123] Gathering logs for kubelet ...
	I0312 00:25:26.690283 1183642 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0312 00:25:26.757014 1183642 logs.go:138] Found kubelet problem: Mar 12 00:19:58 old-k8s-version-571339 kubelet[663]: E0312 00:19:58.352250     663 pod_workers.go:191] Error syncing pod f8f3ac0e-3c2e-451d-8e5a-d3937eab4142 ("metrics-server-9975d5f86-c87xf_kube-system(f8f3ac0e-3c2e-451d-8e5a-d3937eab4142)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0312 00:25:26.757217 1183642 logs.go:138] Found kubelet problem: Mar 12 00:19:58 old-k8s-version-571339 kubelet[663]: E0312 00:19:58.560744     663 pod_workers.go:191] Error syncing pod f8f3ac0e-3c2e-451d-8e5a-d3937eab4142 ("metrics-server-9975d5f86-c87xf_kube-system(f8f3ac0e-3c2e-451d-8e5a-d3937eab4142)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0312 00:25:26.760035 1183642 logs.go:138] Found kubelet problem: Mar 12 00:20:13 old-k8s-version-571339 kubelet[663]: E0312 00:20:13.145527     663 pod_workers.go:191] Error syncing pod f8f3ac0e-3c2e-451d-8e5a-d3937eab4142 ("metrics-server-9975d5f86-c87xf_kube-system(f8f3ac0e-3c2e-451d-8e5a-d3937eab4142)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0312 00:25:26.760786 1183642 logs.go:138] Found kubelet problem: Mar 12 00:20:15 old-k8s-version-571339 kubelet[663]: E0312 00:20:15.137936     663 reflector.go:138] object-"kubernetes-dashboard"/"kubernetes-dashboard-token-5nnmb": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kubernetes-dashboard-token-5nnmb" is forbidden: User "system:node:old-k8s-version-571339" cannot list resource "secrets" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'old-k8s-version-571339' and this object
	W0312 00:25:26.764232 1183642 logs.go:138] Found kubelet problem: Mar 12 00:20:27 old-k8s-version-571339 kubelet[663]: E0312 00:20:27.691697     663 pod_workers.go:191] Error syncing pod 917984d8-3e49-416e-bb76-594879112f42 ("dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"
	W0312 00:25:26.764435 1183642 logs.go:138] Found kubelet problem: Mar 12 00:20:28 old-k8s-version-571339 kubelet[663]: E0312 00:20:28.142650     663 pod_workers.go:191] Error syncing pod f8f3ac0e-3c2e-451d-8e5a-d3937eab4142 ("metrics-server-9975d5f86-c87xf_kube-system(f8f3ac0e-3c2e-451d-8e5a-d3937eab4142)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0312 00:25:26.764781 1183642 logs.go:138] Found kubelet problem: Mar 12 00:20:28 old-k8s-version-571339 kubelet[663]: E0312 00:20:28.687196     663 pod_workers.go:191] Error syncing pod 917984d8-3e49-416e-bb76-594879112f42 ("dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"
	W0312 00:25:26.765236 1183642 logs.go:138] Found kubelet problem: Mar 12 00:20:29 old-k8s-version-571339 kubelet[663]: E0312 00:20:29.694628     663 pod_workers.go:191] Error syncing pod c73ffc75-b4a0-4184-80c3-a73e21cc954e ("storage-provisioner_kube-system(c73ffc75-b4a0-4184-80c3-a73e21cc954e)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(c73ffc75-b4a0-4184-80c3-a73e21cc954e)"
	W0312 00:25:26.765570 1183642 logs.go:138] Found kubelet problem: Mar 12 00:20:30 old-k8s-version-571339 kubelet[663]: E0312 00:20:30.101093     663 pod_workers.go:191] Error syncing pod 917984d8-3e49-416e-bb76-594879112f42 ("dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"
	W0312 00:25:26.766572 1183642 logs.go:138] Found kubelet problem: Mar 12 00:20:42 old-k8s-version-571339 kubelet[663]: E0312 00:20:42.735991     663 pod_workers.go:191] Error syncing pod 917984d8-3e49-416e-bb76-594879112f42 ("dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"
	W0312 00:25:26.769242 1183642 logs.go:138] Found kubelet problem: Mar 12 00:20:43 old-k8s-version-571339 kubelet[663]: E0312 00:20:43.152194     663 pod_workers.go:191] Error syncing pod f8f3ac0e-3c2e-451d-8e5a-d3937eab4142 ("metrics-server-9975d5f86-c87xf_kube-system(f8f3ac0e-3c2e-451d-8e5a-d3937eab4142)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0312 00:25:26.769586 1183642 logs.go:138] Found kubelet problem: Mar 12 00:20:50 old-k8s-version-571339 kubelet[663]: E0312 00:20:50.100788     663 pod_workers.go:191] Error syncing pod 917984d8-3e49-416e-bb76-594879112f42 ("dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"
	W0312 00:25:26.769781 1183642 logs.go:138] Found kubelet problem: Mar 12 00:20:54 old-k8s-version-571339 kubelet[663]: E0312 00:20:54.136340     663 pod_workers.go:191] Error syncing pod f8f3ac0e-3c2e-451d-8e5a-d3937eab4142 ("metrics-server-9975d5f86-c87xf_kube-system(f8f3ac0e-3c2e-451d-8e5a-d3937eab4142)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0312 00:25:26.770396 1183642 logs.go:138] Found kubelet problem: Mar 12 00:21:03 old-k8s-version-571339 kubelet[663]: E0312 00:21:03.794154     663 pod_workers.go:191] Error syncing pod 917984d8-3e49-416e-bb76-594879112f42 ("dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"
	W0312 00:25:26.770580 1183642 logs.go:138] Found kubelet problem: Mar 12 00:21:07 old-k8s-version-571339 kubelet[663]: E0312 00:21:07.135168     663 pod_workers.go:191] Error syncing pod f8f3ac0e-3c2e-451d-8e5a-d3937eab4142 ("metrics-server-9975d5f86-c87xf_kube-system(f8f3ac0e-3c2e-451d-8e5a-d3937eab4142)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0312 00:25:26.770905 1183642 logs.go:138] Found kubelet problem: Mar 12 00:21:10 old-k8s-version-571339 kubelet[663]: E0312 00:21:10.101222     663 pod_workers.go:191] Error syncing pod 917984d8-3e49-416e-bb76-594879112f42 ("dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"
	W0312 00:25:26.771102 1183642 logs.go:138] Found kubelet problem: Mar 12 00:21:22 old-k8s-version-571339 kubelet[663]: E0312 00:21:22.136421     663 pod_workers.go:191] Error syncing pod f8f3ac0e-3c2e-451d-8e5a-d3937eab4142 ("metrics-server-9975d5f86-c87xf_kube-system(f8f3ac0e-3c2e-451d-8e5a-d3937eab4142)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0312 00:25:26.771444 1183642 logs.go:138] Found kubelet problem: Mar 12 00:21:25 old-k8s-version-571339 kubelet[663]: E0312 00:21:25.135960     663 pod_workers.go:191] Error syncing pod 917984d8-3e49-416e-bb76-594879112f42 ("dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"
	W0312 00:25:26.773913 1183642 logs.go:138] Found kubelet problem: Mar 12 00:21:33 old-k8s-version-571339 kubelet[663]: E0312 00:21:33.143768     663 pod_workers.go:191] Error syncing pod f8f3ac0e-3c2e-451d-8e5a-d3937eab4142 ("metrics-server-9975d5f86-c87xf_kube-system(f8f3ac0e-3c2e-451d-8e5a-d3937eab4142)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0312 00:25:26.774266 1183642 logs.go:138] Found kubelet problem: Mar 12 00:21:40 old-k8s-version-571339 kubelet[663]: E0312 00:21:40.134926     663 pod_workers.go:191] Error syncing pod 917984d8-3e49-416e-bb76-594879112f42 ("dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"
	W0312 00:25:26.774456 1183642 logs.go:138] Found kubelet problem: Mar 12 00:21:45 old-k8s-version-571339 kubelet[663]: E0312 00:21:45.143152     663 pod_workers.go:191] Error syncing pod f8f3ac0e-3c2e-451d-8e5a-d3937eab4142 ("metrics-server-9975d5f86-c87xf_kube-system(f8f3ac0e-3c2e-451d-8e5a-d3937eab4142)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0312 00:25:26.775048 1183642 logs.go:138] Found kubelet problem: Mar 12 00:21:51 old-k8s-version-571339 kubelet[663]: E0312 00:21:51.956550     663 pod_workers.go:191] Error syncing pod 917984d8-3e49-416e-bb76-594879112f42 ("dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"
	W0312 00:25:26.775388 1183642 logs.go:138] Found kubelet problem: Mar 12 00:22:00 old-k8s-version-571339 kubelet[663]: E0312 00:22:00.104419     663 pod_workers.go:191] Error syncing pod 917984d8-3e49-416e-bb76-594879112f42 ("dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"
	W0312 00:25:26.775575 1183642 logs.go:138] Found kubelet problem: Mar 12 00:22:00 old-k8s-version-571339 kubelet[663]: E0312 00:22:00.145154     663 pod_workers.go:191] Error syncing pod f8f3ac0e-3c2e-451d-8e5a-d3937eab4142 ("metrics-server-9975d5f86-c87xf_kube-system(f8f3ac0e-3c2e-451d-8e5a-d3937eab4142)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0312 00:25:26.775901 1183642 logs.go:138] Found kubelet problem: Mar 12 00:22:15 old-k8s-version-571339 kubelet[663]: E0312 00:22:15.136079     663 pod_workers.go:191] Error syncing pod 917984d8-3e49-416e-bb76-594879112f42 ("dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"
	W0312 00:25:26.776087 1183642 logs.go:138] Found kubelet problem: Mar 12 00:22:15 old-k8s-version-571339 kubelet[663]: E0312 00:22:15.136820     663 pod_workers.go:191] Error syncing pod f8f3ac0e-3c2e-451d-8e5a-d3937eab4142 ("metrics-server-9975d5f86-c87xf_kube-system(f8f3ac0e-3c2e-451d-8e5a-d3937eab4142)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0312 00:25:26.776271 1183642 logs.go:138] Found kubelet problem: Mar 12 00:22:28 old-k8s-version-571339 kubelet[663]: E0312 00:22:28.135086     663 pod_workers.go:191] Error syncing pod f8f3ac0e-3c2e-451d-8e5a-d3937eab4142 ("metrics-server-9975d5f86-c87xf_kube-system(f8f3ac0e-3c2e-451d-8e5a-d3937eab4142)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0312 00:25:26.776603 1183642 logs.go:138] Found kubelet problem: Mar 12 00:22:30 old-k8s-version-571339 kubelet[663]: E0312 00:22:30.140098     663 pod_workers.go:191] Error syncing pod 917984d8-3e49-416e-bb76-594879112f42 ("dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"
	W0312 00:25:26.776928 1183642 logs.go:138] Found kubelet problem: Mar 12 00:22:42 old-k8s-version-571339 kubelet[663]: E0312 00:22:42.135063     663 pod_workers.go:191] Error syncing pod 917984d8-3e49-416e-bb76-594879112f42 ("dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"
	W0312 00:25:26.777116 1183642 logs.go:138] Found kubelet problem: Mar 12 00:22:43 old-k8s-version-571339 kubelet[663]: E0312 00:22:43.135158     663 pod_workers.go:191] Error syncing pod f8f3ac0e-3c2e-451d-8e5a-d3937eab4142 ("metrics-server-9975d5f86-c87xf_kube-system(f8f3ac0e-3c2e-451d-8e5a-d3937eab4142)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0312 00:25:26.779571 1183642 logs.go:138] Found kubelet problem: Mar 12 00:22:54 old-k8s-version-571339 kubelet[663]: E0312 00:22:54.143460     663 pod_workers.go:191] Error syncing pod f8f3ac0e-3c2e-451d-8e5a-d3937eab4142 ("metrics-server-9975d5f86-c87xf_kube-system(f8f3ac0e-3c2e-451d-8e5a-d3937eab4142)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0312 00:25:26.779896 1183642 logs.go:138] Found kubelet problem: Mar 12 00:22:56 old-k8s-version-571339 kubelet[663]: E0312 00:22:56.134757     663 pod_workers.go:191] Error syncing pod 917984d8-3e49-416e-bb76-594879112f42 ("dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"
	W0312 00:25:26.780081 1183642 logs.go:138] Found kubelet problem: Mar 12 00:23:06 old-k8s-version-571339 kubelet[663]: E0312 00:23:06.135565     663 pod_workers.go:191] Error syncing pod f8f3ac0e-3c2e-451d-8e5a-d3937eab4142 ("metrics-server-9975d5f86-c87xf_kube-system(f8f3ac0e-3c2e-451d-8e5a-d3937eab4142)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0312 00:25:26.780408 1183642 logs.go:138] Found kubelet problem: Mar 12 00:23:07 old-k8s-version-571339 kubelet[663]: E0312 00:23:07.134751     663 pod_workers.go:191] Error syncing pod 917984d8-3e49-416e-bb76-594879112f42 ("dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"
	W0312 00:25:26.780994 1183642 logs.go:138] Found kubelet problem: Mar 12 00:23:19 old-k8s-version-571339 kubelet[663]: E0312 00:23:19.156710     663 pod_workers.go:191] Error syncing pod 917984d8-3e49-416e-bb76-594879112f42 ("dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"
	W0312 00:25:26.781319 1183642 logs.go:138] Found kubelet problem: Mar 12 00:23:20 old-k8s-version-571339 kubelet[663]: E0312 00:23:20.160925     663 pod_workers.go:191] Error syncing pod 917984d8-3e49-416e-bb76-594879112f42 ("dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"
	W0312 00:25:26.781505 1183642 logs.go:138] Found kubelet problem: Mar 12 00:23:21 old-k8s-version-571339 kubelet[663]: E0312 00:23:21.135408     663 pod_workers.go:191] Error syncing pod f8f3ac0e-3c2e-451d-8e5a-d3937eab4142 ("metrics-server-9975d5f86-c87xf_kube-system(f8f3ac0e-3c2e-451d-8e5a-d3937eab4142)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0312 00:25:26.781832 1183642 logs.go:138] Found kubelet problem: Mar 12 00:23:32 old-k8s-version-571339 kubelet[663]: E0312 00:23:32.135536     663 pod_workers.go:191] Error syncing pod 917984d8-3e49-416e-bb76-594879112f42 ("dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"
	W0312 00:25:26.782019 1183642 logs.go:138] Found kubelet problem: Mar 12 00:23:35 old-k8s-version-571339 kubelet[663]: E0312 00:23:35.135123     663 pod_workers.go:191] Error syncing pod f8f3ac0e-3c2e-451d-8e5a-d3937eab4142 ("metrics-server-9975d5f86-c87xf_kube-system(f8f3ac0e-3c2e-451d-8e5a-d3937eab4142)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0312 00:25:26.782353 1183642 logs.go:138] Found kubelet problem: Mar 12 00:23:45 old-k8s-version-571339 kubelet[663]: E0312 00:23:45.134915     663 pod_workers.go:191] Error syncing pod 917984d8-3e49-416e-bb76-594879112f42 ("dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"
	W0312 00:25:26.782536 1183642 logs.go:138] Found kubelet problem: Mar 12 00:23:47 old-k8s-version-571339 kubelet[663]: E0312 00:23:47.135061     663 pod_workers.go:191] Error syncing pod f8f3ac0e-3c2e-451d-8e5a-d3937eab4142 ("metrics-server-9975d5f86-c87xf_kube-system(f8f3ac0e-3c2e-451d-8e5a-d3937eab4142)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0312 00:25:26.782860 1183642 logs.go:138] Found kubelet problem: Mar 12 00:23:59 old-k8s-version-571339 kubelet[663]: E0312 00:23:59.134729     663 pod_workers.go:191] Error syncing pod 917984d8-3e49-416e-bb76-594879112f42 ("dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"
	W0312 00:25:26.783047 1183642 logs.go:138] Found kubelet problem: Mar 12 00:24:02 old-k8s-version-571339 kubelet[663]: E0312 00:24:02.135523     663 pod_workers.go:191] Error syncing pod f8f3ac0e-3c2e-451d-8e5a-d3937eab4142 ("metrics-server-9975d5f86-c87xf_kube-system(f8f3ac0e-3c2e-451d-8e5a-d3937eab4142)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0312 00:25:26.783378 1183642 logs.go:138] Found kubelet problem: Mar 12 00:24:11 old-k8s-version-571339 kubelet[663]: E0312 00:24:11.134786     663 pod_workers.go:191] Error syncing pod 917984d8-3e49-416e-bb76-594879112f42 ("dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"
	W0312 00:25:26.783568 1183642 logs.go:138] Found kubelet problem: Mar 12 00:24:16 old-k8s-version-571339 kubelet[663]: E0312 00:24:16.135140     663 pod_workers.go:191] Error syncing pod f8f3ac0e-3c2e-451d-8e5a-d3937eab4142 ("metrics-server-9975d5f86-c87xf_kube-system(f8f3ac0e-3c2e-451d-8e5a-d3937eab4142)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0312 00:25:26.783898 1183642 logs.go:138] Found kubelet problem: Mar 12 00:24:22 old-k8s-version-571339 kubelet[663]: E0312 00:24:22.135430     663 pod_workers.go:191] Error syncing pod 917984d8-3e49-416e-bb76-594879112f42 ("dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"
	W0312 00:25:26.784081 1183642 logs.go:138] Found kubelet problem: Mar 12 00:24:31 old-k8s-version-571339 kubelet[663]: E0312 00:24:31.135197     663 pod_workers.go:191] Error syncing pod f8f3ac0e-3c2e-451d-8e5a-d3937eab4142 ("metrics-server-9975d5f86-c87xf_kube-system(f8f3ac0e-3c2e-451d-8e5a-d3937eab4142)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0312 00:25:26.784405 1183642 logs.go:138] Found kubelet problem: Mar 12 00:24:35 old-k8s-version-571339 kubelet[663]: E0312 00:24:35.134852     663 pod_workers.go:191] Error syncing pod 917984d8-3e49-416e-bb76-594879112f42 ("dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"
	W0312 00:25:26.784588 1183642 logs.go:138] Found kubelet problem: Mar 12 00:24:43 old-k8s-version-571339 kubelet[663]: E0312 00:24:43.135594     663 pod_workers.go:191] Error syncing pod f8f3ac0e-3c2e-451d-8e5a-d3937eab4142 ("metrics-server-9975d5f86-c87xf_kube-system(f8f3ac0e-3c2e-451d-8e5a-d3937eab4142)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0312 00:25:26.784918 1183642 logs.go:138] Found kubelet problem: Mar 12 00:24:47 old-k8s-version-571339 kubelet[663]: E0312 00:24:47.134779     663 pod_workers.go:191] Error syncing pod 917984d8-3e49-416e-bb76-594879112f42 ("dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"
	W0312 00:25:26.785104 1183642 logs.go:138] Found kubelet problem: Mar 12 00:24:56 old-k8s-version-571339 kubelet[663]: E0312 00:24:56.137730     663 pod_workers.go:191] Error syncing pod f8f3ac0e-3c2e-451d-8e5a-d3937eab4142 ("metrics-server-9975d5f86-c87xf_kube-system(f8f3ac0e-3c2e-451d-8e5a-d3937eab4142)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0312 00:25:26.785429 1183642 logs.go:138] Found kubelet problem: Mar 12 00:24:59 old-k8s-version-571339 kubelet[663]: E0312 00:24:59.134749     663 pod_workers.go:191] Error syncing pod 917984d8-3e49-416e-bb76-594879112f42 ("dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"
	W0312 00:25:26.785615 1183642 logs.go:138] Found kubelet problem: Mar 12 00:25:07 old-k8s-version-571339 kubelet[663]: E0312 00:25:07.135133     663 pod_workers.go:191] Error syncing pod f8f3ac0e-3c2e-451d-8e5a-d3937eab4142 ("metrics-server-9975d5f86-c87xf_kube-system(f8f3ac0e-3c2e-451d-8e5a-d3937eab4142)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0312 00:25:26.785969 1183642 logs.go:138] Found kubelet problem: Mar 12 00:25:10 old-k8s-version-571339 kubelet[663]: E0312 00:25:10.134865     663 pod_workers.go:191] Error syncing pod 917984d8-3e49-416e-bb76-594879112f42 ("dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"
	W0312 00:25:26.786154 1183642 logs.go:138] Found kubelet problem: Mar 12 00:25:19 old-k8s-version-571339 kubelet[663]: E0312 00:25:19.135515     663 pod_workers.go:191] Error syncing pod f8f3ac0e-3c2e-451d-8e5a-d3937eab4142 ("metrics-server-9975d5f86-c87xf_kube-system(f8f3ac0e-3c2e-451d-8e5a-d3937eab4142)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0312 00:25:26.786482 1183642 logs.go:138] Found kubelet problem: Mar 12 00:25:22 old-k8s-version-571339 kubelet[663]: E0312 00:25:22.136716     663 pod_workers.go:191] Error syncing pod 917984d8-3e49-416e-bb76-594879112f42 ("dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"
	I0312 00:25:26.786493 1183642 logs.go:123] Gathering logs for dmesg ...
	I0312 00:25:26.786508 1183642 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0312 00:25:26.806149 1183642 logs.go:123] Gathering logs for etcd [00fea42a626bc543839c933d2b36dd4155e2329531f2c8a74fa65079753377a9] ...
	I0312 00:25:26.806190 1183642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 00fea42a626bc543839c933d2b36dd4155e2329531f2c8a74fa65079753377a9"
	I0312 00:25:26.874399 1183642 logs.go:123] Gathering logs for kube-controller-manager [ea472223f1505d76ec6e5c18af4f3ab7760ebdefed097a213c78a396e15d7ba7] ...
	I0312 00:25:26.874432 1183642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ea472223f1505d76ec6e5c18af4f3ab7760ebdefed097a213c78a396e15d7ba7"
	I0312 00:25:26.948646 1183642 logs.go:123] Gathering logs for kube-controller-manager [ac83611721f7a3d26415ed5ae3625edece62f8bd00bc9a63ce61ffa2ad2c9fbc] ...
	I0312 00:25:26.948680 1183642 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ac83611721f7a3d26415ed5ae3625edece62f8bd00bc9a63ce61ffa2ad2c9fbc"
	I0312 00:25:27.046898 1183642 out.go:304] Setting ErrFile to fd 2...
	I0312 00:25:27.046932 1183642 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0312 00:25:27.047003 1183642 out.go:239] X Problems detected in kubelet:
	W0312 00:25:27.047015 1183642 out.go:239]   Mar 12 00:24:59 old-k8s-version-571339 kubelet[663]: E0312 00:24:59.134749     663 pod_workers.go:191] Error syncing pod 917984d8-3e49-416e-bb76-594879112f42 ("dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"
	W0312 00:25:27.047022 1183642 out.go:239]   Mar 12 00:25:07 old-k8s-version-571339 kubelet[663]: E0312 00:25:07.135133     663 pod_workers.go:191] Error syncing pod f8f3ac0e-3c2e-451d-8e5a-d3937eab4142 ("metrics-server-9975d5f86-c87xf_kube-system(f8f3ac0e-3c2e-451d-8e5a-d3937eab4142)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0312 00:25:27.047037 1183642 out.go:239]   Mar 12 00:25:10 old-k8s-version-571339 kubelet[663]: E0312 00:25:10.134865     663 pod_workers.go:191] Error syncing pod 917984d8-3e49-416e-bb76-594879112f42 ("dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"
	W0312 00:25:27.047048 1183642 out.go:239]   Mar 12 00:25:19 old-k8s-version-571339 kubelet[663]: E0312 00:25:19.135515     663 pod_workers.go:191] Error syncing pod f8f3ac0e-3c2e-451d-8e5a-d3937eab4142 ("metrics-server-9975d5f86-c87xf_kube-system(f8f3ac0e-3c2e-451d-8e5a-d3937eab4142)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0312 00:25:27.047056 1183642 out.go:239]   Mar 12 00:25:22 old-k8s-version-571339 kubelet[663]: E0312 00:25:22.136716     663 pod_workers.go:191] Error syncing pod 917984d8-3e49-416e-bb76-594879112f42 ("dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"
	I0312 00:25:27.047062 1183642 out.go:304] Setting ErrFile to fd 2...
	I0312 00:25:27.047068 1183642 out.go:338] TERM=,COLORTERM=, which probably does not support color
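Note: the log-gathering steps recorded above can be reproduced by hand. A minimal sketch, assuming SSH access to the profile's node and the same crictl/journalctl tooling shown in the Run lines (the container id is a placeholder, not a value from this run):

	minikube -p old-k8s-version-571339 ssh -- sudo crictl ps -a
	minikube -p old-k8s-version-571339 ssh -- sudo /usr/bin/crictl logs --tail 400 <container-id>
	minikube -p old-k8s-version-571339 ssh -- sudo journalctl -u kubelet -n 400

These are the same commands minikube issues through ssh_runner above; only the invocation via `minikube ssh` is assumed here.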
	I0312 00:25:27.424002 1188932 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0312 00:25:27.431924 1188932 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0312 00:25:27.433124 1188932 api_server.go:141] control plane version: v1.29.0-rc.2
	I0312 00:25:27.433160 1188932 api_server.go:131] duration metric: took 11.816324942s to wait for apiserver health ...
	I0312 00:25:27.433168 1188932 system_pods.go:43] waiting for kube-system pods to appear ...
	I0312 00:25:27.433189 1188932 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0312 00:25:27.433252 1188932 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0312 00:25:27.483202 1188932 cri.go:89] found id: "5bba79d28e804260c46d7e8af4c7322a6e22b8739a7b65e4f7be5d64306a2ec8"
	I0312 00:25:27.483222 1188932 cri.go:89] found id: "4e2109d4bafae2dd679c7ef05f522399401b1a72fb57a580332be0348557938e"
	I0312 00:25:27.483227 1188932 cri.go:89] found id: ""
	I0312 00:25:27.483234 1188932 logs.go:276] 2 containers: [5bba79d28e804260c46d7e8af4c7322a6e22b8739a7b65e4f7be5d64306a2ec8 4e2109d4bafae2dd679c7ef05f522399401b1a72fb57a580332be0348557938e]
	I0312 00:25:27.483294 1188932 ssh_runner.go:195] Run: which crictl
	I0312 00:25:27.487015 1188932 ssh_runner.go:195] Run: which crictl
	I0312 00:25:27.490679 1188932 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0312 00:25:27.490761 1188932 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0312 00:25:27.531815 1188932 cri.go:89] found id: "4c03779c654a410263bf652c7f2d430f984d91760e30f9510e4952a4de615984"
	I0312 00:25:27.531838 1188932 cri.go:89] found id: "1433ae512c5cae729926e9a205666b562fb23efcf8bdc4b06b97fb200275abad"
	I0312 00:25:27.531843 1188932 cri.go:89] found id: ""
	I0312 00:25:27.531867 1188932 logs.go:276] 2 containers: [4c03779c654a410263bf652c7f2d430f984d91760e30f9510e4952a4de615984 1433ae512c5cae729926e9a205666b562fb23efcf8bdc4b06b97fb200275abad]
	I0312 00:25:27.531937 1188932 ssh_runner.go:195] Run: which crictl
	I0312 00:25:27.535875 1188932 ssh_runner.go:195] Run: which crictl
	I0312 00:25:27.539681 1188932 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0312 00:25:27.539762 1188932 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0312 00:25:27.590701 1188932 cri.go:89] found id: "c2acdec1d054a8ac4872c3f65232c70b6c961ec0aba27f0912710c095efd38b6"
	I0312 00:25:27.590724 1188932 cri.go:89] found id: "84b4bce6633a2747538b2b0094a5cdbd7dbdcc49272243631213feef4909f91d"
	I0312 00:25:27.590729 1188932 cri.go:89] found id: ""
	I0312 00:25:27.590736 1188932 logs.go:276] 2 containers: [c2acdec1d054a8ac4872c3f65232c70b6c961ec0aba27f0912710c095efd38b6 84b4bce6633a2747538b2b0094a5cdbd7dbdcc49272243631213feef4909f91d]
	I0312 00:25:27.590821 1188932 ssh_runner.go:195] Run: which crictl
	I0312 00:25:27.594960 1188932 ssh_runner.go:195] Run: which crictl
	I0312 00:25:27.598873 1188932 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0312 00:25:27.598981 1188932 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0312 00:25:27.639416 1188932 cri.go:89] found id: "88c55a6c60bb439bcdf99b25641fcd058c1a45d0e83950575189d082ee25643b"
	I0312 00:25:27.639441 1188932 cri.go:89] found id: "c8529a2810c2bb6eb70f0f41e1933487322a4e0712a39e177213f3244173c3df"
	I0312 00:25:27.639445 1188932 cri.go:89] found id: ""
	I0312 00:25:27.639452 1188932 logs.go:276] 2 containers: [88c55a6c60bb439bcdf99b25641fcd058c1a45d0e83950575189d082ee25643b c8529a2810c2bb6eb70f0f41e1933487322a4e0712a39e177213f3244173c3df]
	I0312 00:25:27.639524 1188932 ssh_runner.go:195] Run: which crictl
	I0312 00:25:27.643051 1188932 ssh_runner.go:195] Run: which crictl
	I0312 00:25:27.646656 1188932 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0312 00:25:27.646730 1188932 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0312 00:25:27.685204 1188932 cri.go:89] found id: "696c39e44fc846bdc283e21d2e90ef3b55024f28122476a01c185722052ac947"
	I0312 00:25:27.685226 1188932 cri.go:89] found id: "98ff369cade299c3b79c03bdcf3fa19fb46b634e98833fe1a4dacb2e30baa2c8"
	I0312 00:25:27.685231 1188932 cri.go:89] found id: ""
	I0312 00:25:27.685238 1188932 logs.go:276] 2 containers: [696c39e44fc846bdc283e21d2e90ef3b55024f28122476a01c185722052ac947 98ff369cade299c3b79c03bdcf3fa19fb46b634e98833fe1a4dacb2e30baa2c8]
	I0312 00:25:27.685296 1188932 ssh_runner.go:195] Run: which crictl
	I0312 00:25:27.688926 1188932 ssh_runner.go:195] Run: which crictl
	I0312 00:25:27.692613 1188932 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0312 00:25:27.692685 1188932 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0312 00:25:27.732789 1188932 cri.go:89] found id: "819dcb8ce3bf495d395f8bfd175489a574b14f4d7ec3abfc8d1da331f9f75175"
	I0312 00:25:27.732822 1188932 cri.go:89] found id: "e37d0739e1cf5a034f8279d0f1c5a380bdf13f8df12805914af49b8d09457cc3"
	I0312 00:25:27.732828 1188932 cri.go:89] found id: ""
	I0312 00:25:27.732837 1188932 logs.go:276] 2 containers: [819dcb8ce3bf495d395f8bfd175489a574b14f4d7ec3abfc8d1da331f9f75175 e37d0739e1cf5a034f8279d0f1c5a380bdf13f8df12805914af49b8d09457cc3]
	I0312 00:25:27.732936 1188932 ssh_runner.go:195] Run: which crictl
	I0312 00:25:27.736608 1188932 ssh_runner.go:195] Run: which crictl
	I0312 00:25:27.740158 1188932 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0312 00:25:27.740238 1188932 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0312 00:25:27.779515 1188932 cri.go:89] found id: "323bfd261df2aa6bdcaadc95d9b56bc2270a431303c0f1b4ecf16c54f87d6c7b"
	I0312 00:25:27.779536 1188932 cri.go:89] found id: "9d99f44eeab838fb07f421ae4950e661e5f56229d69d698d8f9d2033522367ec"
	I0312 00:25:27.779541 1188932 cri.go:89] found id: ""
	I0312 00:25:27.779549 1188932 logs.go:276] 2 containers: [323bfd261df2aa6bdcaadc95d9b56bc2270a431303c0f1b4ecf16c54f87d6c7b 9d99f44eeab838fb07f421ae4950e661e5f56229d69d698d8f9d2033522367ec]
	I0312 00:25:27.779605 1188932 ssh_runner.go:195] Run: which crictl
	I0312 00:25:27.783235 1188932 ssh_runner.go:195] Run: which crictl
	I0312 00:25:27.786785 1188932 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0312 00:25:27.786892 1188932 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0312 00:25:27.829835 1188932 cri.go:89] found id: "9368f694d66d01b464cc56ea1fd680f25acfb86767c32c49af31dc0da67bacb2"
	I0312 00:25:27.829947 1188932 cri.go:89] found id: "dec1c27378bb4ef4460888bcfb40ca5dfccc459968492dd3e38b75ac421b0e8e"
	I0312 00:25:27.829958 1188932 cri.go:89] found id: ""
	I0312 00:25:27.829970 1188932 logs.go:276] 2 containers: [9368f694d66d01b464cc56ea1fd680f25acfb86767c32c49af31dc0da67bacb2 dec1c27378bb4ef4460888bcfb40ca5dfccc459968492dd3e38b75ac421b0e8e]
	I0312 00:25:27.830064 1188932 ssh_runner.go:195] Run: which crictl
	I0312 00:25:27.835410 1188932 ssh_runner.go:195] Run: which crictl
	I0312 00:25:27.840308 1188932 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0312 00:25:27.840406 1188932 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0312 00:25:27.884411 1188932 cri.go:89] found id: "37d755023a893e06a00cdc14d2dc52b4f5c42a585c7ee881d7d3e29d292b126d"
	I0312 00:25:27.884447 1188932 cri.go:89] found id: ""
	I0312 00:25:27.884455 1188932 logs.go:276] 1 containers: [37d755023a893e06a00cdc14d2dc52b4f5c42a585c7ee881d7d3e29d292b126d]
	I0312 00:25:27.884515 1188932 ssh_runner.go:195] Run: which crictl
	I0312 00:25:27.888405 1188932 logs.go:123] Gathering logs for storage-provisioner [dec1c27378bb4ef4460888bcfb40ca5dfccc459968492dd3e38b75ac421b0e8e] ...
	I0312 00:25:27.888474 1188932 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dec1c27378bb4ef4460888bcfb40ca5dfccc459968492dd3e38b75ac421b0e8e"
	I0312 00:25:27.930569 1188932 logs.go:123] Gathering logs for kubernetes-dashboard [37d755023a893e06a00cdc14d2dc52b4f5c42a585c7ee881d7d3e29d292b126d] ...
	I0312 00:25:27.930601 1188932 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 37d755023a893e06a00cdc14d2dc52b4f5c42a585c7ee881d7d3e29d292b126d"
	I0312 00:25:27.975720 1188932 logs.go:123] Gathering logs for kube-apiserver [5bba79d28e804260c46d7e8af4c7322a6e22b8739a7b65e4f7be5d64306a2ec8] ...
	I0312 00:25:27.975752 1188932 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5bba79d28e804260c46d7e8af4c7322a6e22b8739a7b65e4f7be5d64306a2ec8"
	I0312 00:25:28.055686 1188932 logs.go:123] Gathering logs for kube-proxy [696c39e44fc846bdc283e21d2e90ef3b55024f28122476a01c185722052ac947] ...
	I0312 00:25:28.055726 1188932 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 696c39e44fc846bdc283e21d2e90ef3b55024f28122476a01c185722052ac947"
	I0312 00:25:28.105164 1188932 logs.go:123] Gathering logs for kube-proxy [98ff369cade299c3b79c03bdcf3fa19fb46b634e98833fe1a4dacb2e30baa2c8] ...
	I0312 00:25:28.105194 1188932 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 98ff369cade299c3b79c03bdcf3fa19fb46b634e98833fe1a4dacb2e30baa2c8"
	I0312 00:25:28.157314 1188932 logs.go:123] Gathering logs for kindnet [323bfd261df2aa6bdcaadc95d9b56bc2270a431303c0f1b4ecf16c54f87d6c7b] ...
	I0312 00:25:28.157343 1188932 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 323bfd261df2aa6bdcaadc95d9b56bc2270a431303c0f1b4ecf16c54f87d6c7b"
	I0312 00:25:28.209856 1188932 logs.go:123] Gathering logs for kindnet [9d99f44eeab838fb07f421ae4950e661e5f56229d69d698d8f9d2033522367ec] ...
	I0312 00:25:28.209886 1188932 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d99f44eeab838fb07f421ae4950e661e5f56229d69d698d8f9d2033522367ec"
	I0312 00:25:28.280971 1188932 logs.go:123] Gathering logs for storage-provisioner [9368f694d66d01b464cc56ea1fd680f25acfb86767c32c49af31dc0da67bacb2] ...
	I0312 00:25:28.281000 1188932 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9368f694d66d01b464cc56ea1fd680f25acfb86767c32c49af31dc0da67bacb2"
	I0312 00:25:28.326384 1188932 logs.go:123] Gathering logs for describe nodes ...
	I0312 00:25:28.326415 1188932 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0312 00:25:28.470175 1188932 logs.go:123] Gathering logs for kube-scheduler [88c55a6c60bb439bcdf99b25641fcd058c1a45d0e83950575189d082ee25643b] ...
	I0312 00:25:28.470210 1188932 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 88c55a6c60bb439bcdf99b25641fcd058c1a45d0e83950575189d082ee25643b"
	I0312 00:25:28.530734 1188932 logs.go:123] Gathering logs for kube-controller-manager [819dcb8ce3bf495d395f8bfd175489a574b14f4d7ec3abfc8d1da331f9f75175] ...
	I0312 00:25:28.530768 1188932 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 819dcb8ce3bf495d395f8bfd175489a574b14f4d7ec3abfc8d1da331f9f75175"
	I0312 00:25:28.612208 1188932 logs.go:123] Gathering logs for kube-controller-manager [e37d0739e1cf5a034f8279d0f1c5a380bdf13f8df12805914af49b8d09457cc3] ...
	I0312 00:25:28.612240 1188932 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e37d0739e1cf5a034f8279d0f1c5a380bdf13f8df12805914af49b8d09457cc3"
	I0312 00:25:28.685567 1188932 logs.go:123] Gathering logs for container status ...
	I0312 00:25:28.685602 1188932 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0312 00:25:28.762445 1188932 logs.go:123] Gathering logs for etcd [1433ae512c5cae729926e9a205666b562fb23efcf8bdc4b06b97fb200275abad] ...
	I0312 00:25:28.762475 1188932 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1433ae512c5cae729926e9a205666b562fb23efcf8bdc4b06b97fb200275abad"
	I0312 00:25:28.828318 1188932 logs.go:123] Gathering logs for kube-scheduler [c8529a2810c2bb6eb70f0f41e1933487322a4e0712a39e177213f3244173c3df] ...
	I0312 00:25:28.828349 1188932 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c8529a2810c2bb6eb70f0f41e1933487322a4e0712a39e177213f3244173c3df"
	I0312 00:25:28.889240 1188932 logs.go:123] Gathering logs for kube-apiserver [4e2109d4bafae2dd679c7ef05f522399401b1a72fb57a580332be0348557938e] ...
	I0312 00:25:28.889274 1188932 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4e2109d4bafae2dd679c7ef05f522399401b1a72fb57a580332be0348557938e"
	I0312 00:25:28.954312 1188932 logs.go:123] Gathering logs for etcd [4c03779c654a410263bf652c7f2d430f984d91760e30f9510e4952a4de615984] ...
	I0312 00:25:28.954346 1188932 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c03779c654a410263bf652c7f2d430f984d91760e30f9510e4952a4de615984"
	I0312 00:25:29.021788 1188932 logs.go:123] Gathering logs for coredns [c2acdec1d054a8ac4872c3f65232c70b6c961ec0aba27f0912710c095efd38b6] ...
	I0312 00:25:29.021819 1188932 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c2acdec1d054a8ac4872c3f65232c70b6c961ec0aba27f0912710c095efd38b6"
	I0312 00:25:29.069054 1188932 logs.go:123] Gathering logs for coredns [84b4bce6633a2747538b2b0094a5cdbd7dbdcc49272243631213feef4909f91d] ...
	I0312 00:25:29.069083 1188932 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84b4bce6633a2747538b2b0094a5cdbd7dbdcc49272243631213feef4909f91d"
	I0312 00:25:29.116014 1188932 logs.go:123] Gathering logs for containerd ...
	I0312 00:25:29.116044 1188932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0312 00:25:29.188012 1188932 logs.go:123] Gathering logs for kubelet ...
	I0312 00:25:29.188051 1188932 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0312 00:25:29.238053 1188932 logs.go:138] Found kubelet problem: Mar 12 00:21:16 no-preload-820117 kubelet[656]: W0312 00:21:16.198194     656 reflector.go:539] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-820117" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-820117' and this object
	W0312 00:25:29.238324 1188932 logs.go:138] Found kubelet problem: Mar 12 00:21:16 no-preload-820117 kubelet[656]: E0312 00:21:16.198236     656 reflector.go:147] object-"kubernetes-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-820117" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-820117' and this object
	I0312 00:25:29.267755 1188932 logs.go:123] Gathering logs for dmesg ...
	I0312 00:25:29.267800 1188932 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0312 00:25:29.287297 1188932 out.go:304] Setting ErrFile to fd 2...
	I0312 00:25:29.287426 1188932 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0312 00:25:29.287505 1188932 out.go:239] X Problems detected in kubelet:
	W0312 00:25:29.287517 1188932 out.go:239]   Mar 12 00:21:16 no-preload-820117 kubelet[656]: W0312 00:21:16.198194     656 reflector.go:539] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-820117" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-820117' and this object
	W0312 00:25:29.287539 1188932 out.go:239]   Mar 12 00:21:16 no-preload-820117 kubelet[656]: E0312 00:21:16.198236     656 reflector.go:147] object-"kubernetes-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-820117" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-820117' and this object
	I0312 00:25:29.287554 1188932 out.go:304] Setting ErrFile to fd 2...
	I0312 00:25:29.287570 1188932 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0312 00:25:37.048428 1183642 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0312 00:25:37.061991 1183642 api_server.go:72] duration metric: took 6m0.915958249s to wait for apiserver process to appear ...
	I0312 00:25:37.062023 1183642 api_server.go:88] waiting for apiserver healthz status ...
	I0312 00:25:37.064575 1183642 out.go:177] 
	W0312 00:25:37.066356 1183642 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: cluster wait timed out during healthz check
	W0312 00:25:37.066381 1183642 out.go:239] * 
	W0312 00:25:37.067988 1183642 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0312 00:25:37.069878 1183642 out.go:177] 
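For this failing profile, the log capture suggested in the advice box above would be, illustratively:

	out/minikube-linux-arm64 -p old-k8s-version-571339 logs --file=logs.txt

(The --file flag is the one named in the box; the binary path and profile name are taken from the surrounding output.)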
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	d96f9b6b28f75       523cad1a4df73       2 minutes ago       Exited              dashboard-metrics-scraper   5                   4c072bfdf413c       dashboard-metrics-scraper-8d5bb5db8-5w7h2
	98a1386a1a083       ba04bb24b9575       4 minutes ago       Running             storage-provisioner         2                   edb00b61a3759       storage-provisioner
	cb775d92b00e5       20b332c9a70d8       5 minutes ago       Running             kubernetes-dashboard        0                   8cab24836afeb       kubernetes-dashboard-cd95d586-7dtbw
	de7fe1bbfd940       1611cd07b61d5       5 minutes ago       Running             busybox                     1                   2f92a28ed6f88       busybox
	46d69c632200b       25a5233254979       5 minutes ago       Running             kube-proxy                  1                   3d1afaa7fdbea       kube-proxy-tvrz6
	0c3514f843ab8       ba04bb24b9575       5 minutes ago       Exited              storage-provisioner         1                   edb00b61a3759       storage-provisioner
	0d3039260ff7a       db91994f4ee8f       5 minutes ago       Running             coredns                     1                   c0d5814aae44a       coredns-74ff55c5b-pd7cs
	ae84159c4657e       4740c1948d3fc       5 minutes ago       Running             kindnet-cni                 1                   41f7a3d542fb0       kindnet-rjqrt
	cda4f1be508f3       e7605f88f17d6       5 minutes ago       Running             kube-scheduler              1                   cc1db8d1f9f60       kube-scheduler-old-k8s-version-571339
	e90658574cccc       2c08bbbc02d3a       5 minutes ago       Running             kube-apiserver              1                   5bd08f59387bf       kube-apiserver-old-k8s-version-571339
	ea472223f1505       1df8a2b116bd1       5 minutes ago       Running             kube-controller-manager     1                   a88a84a19ae84       kube-controller-manager-old-k8s-version-571339
	227b5f4c3ec0b       05b738aa1bc63       5 minutes ago       Running             etcd                        1                   b3a3020aa402e       etcd-old-k8s-version-571339
	7bb137f0f54c1       1611cd07b61d5       6 minutes ago       Exited              busybox                     0                   3878b8d8c4e8f       busybox
	91ba0f1087505       db91994f4ee8f       7 minutes ago       Exited              coredns                     0                   d94680007b27a       coredns-74ff55c5b-pd7cs
	1a5414516a6e0       4740c1948d3fc       8 minutes ago       Exited              kindnet-cni                 0                   3398bdba178bc       kindnet-rjqrt
	8abc9a2fec8f5       25a5233254979       8 minutes ago       Exited              kube-proxy                  0                   a42a1e56024d9       kube-proxy-tvrz6
	00fea42a626bc       05b738aa1bc63       8 minutes ago       Exited              etcd                        0                   0b7b672a1a34d       etcd-old-k8s-version-571339
	c3f64500c09ef       e7605f88f17d6       8 minutes ago       Exited              kube-scheduler              0                   3558549ec2755       kube-scheduler-old-k8s-version-571339
	ac83611721f7a       1df8a2b116bd1       8 minutes ago       Exited              kube-controller-manager     0                   98b40a199c49e       kube-controller-manager-old-k8s-version-571339
	022154a50546e       2c08bbbc02d3a       8 minutes ago       Exited              kube-apiserver              0                   a81f0f0ec4767       kube-apiserver-old-k8s-version-571339
	
	
	==> containerd <==
	Mar 12 00:21:33 old-k8s-version-571339 containerd[567]: time="2024-03-12T00:21:33.141243057Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" host=fake.domain
	Mar 12 00:21:33 old-k8s-version-571339 containerd[567]: time="2024-03-12T00:21:33.143100368Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Mar 12 00:21:51 old-k8s-version-571339 containerd[567]: time="2024-03-12T00:21:51.136588303Z" level=info msg="CreateContainer within sandbox \"4c072bfdf413cc4e7943a809c1a7f5a31a9cf7e9c35adc7348fcaac6419ca06a\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:4,}"
	Mar 12 00:21:51 old-k8s-version-571339 containerd[567]: time="2024-03-12T00:21:51.156308787Z" level=info msg="CreateContainer within sandbox \"4c072bfdf413cc4e7943a809c1a7f5a31a9cf7e9c35adc7348fcaac6419ca06a\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:4,} returns container id \"6c6a6c77b06d378178da0ea3b51f9fd2adc77daafe8f04ab0c47fc47771dcd09\""
	Mar 12 00:21:51 old-k8s-version-571339 containerd[567]: time="2024-03-12T00:21:51.158796587Z" level=info msg="StartContainer for \"6c6a6c77b06d378178da0ea3b51f9fd2adc77daafe8f04ab0c47fc47771dcd09\""
	Mar 12 00:21:51 old-k8s-version-571339 containerd[567]: time="2024-03-12T00:21:51.239130258Z" level=info msg="StartContainer for \"6c6a6c77b06d378178da0ea3b51f9fd2adc77daafe8f04ab0c47fc47771dcd09\" returns successfully"
	Mar 12 00:21:51 old-k8s-version-571339 containerd[567]: time="2024-03-12T00:21:51.268108785Z" level=info msg="shim disconnected" id=6c6a6c77b06d378178da0ea3b51f9fd2adc77daafe8f04ab0c47fc47771dcd09
	Mar 12 00:21:51 old-k8s-version-571339 containerd[567]: time="2024-03-12T00:21:51.268174572Z" level=warning msg="cleaning up after shim disconnected" id=6c6a6c77b06d378178da0ea3b51f9fd2adc77daafe8f04ab0c47fc47771dcd09 namespace=k8s.io
	Mar 12 00:21:51 old-k8s-version-571339 containerd[567]: time="2024-03-12T00:21:51.268187142Z" level=info msg="cleaning up dead shim"
	Mar 12 00:21:51 old-k8s-version-571339 containerd[567]: time="2024-03-12T00:21:51.277349712Z" level=warning msg="cleanup warnings time=\"2024-03-12T00:21:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2937 runtime=io.containerd.runc.v2\n"
	Mar 12 00:21:51 old-k8s-version-571339 containerd[567]: time="2024-03-12T00:21:51.961980310Z" level=info msg="RemoveContainer for \"349a0a6fa1ecc529c3bf38cbf5f38050c10fd75de25e559a5ac6f114e43cfe53\""
	Mar 12 00:21:51 old-k8s-version-571339 containerd[567]: time="2024-03-12T00:21:51.967205651Z" level=info msg="RemoveContainer for \"349a0a6fa1ecc529c3bf38cbf5f38050c10fd75de25e559a5ac6f114e43cfe53\" returns successfully"
	Mar 12 00:22:54 old-k8s-version-571339 containerd[567]: time="2024-03-12T00:22:54.136434236Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Mar 12 00:22:54 old-k8s-version-571339 containerd[567]: time="2024-03-12T00:22:54.141105016Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" host=fake.domain
	Mar 12 00:22:54 old-k8s-version-571339 containerd[567]: time="2024-03-12T00:22:54.142890387Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Mar 12 00:23:18 old-k8s-version-571339 containerd[567]: time="2024-03-12T00:23:18.138235110Z" level=info msg="CreateContainer within sandbox \"4c072bfdf413cc4e7943a809c1a7f5a31a9cf7e9c35adc7348fcaac6419ca06a\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:5,}"
	Mar 12 00:23:18 old-k8s-version-571339 containerd[567]: time="2024-03-12T00:23:18.153904020Z" level=info msg="CreateContainer within sandbox \"4c072bfdf413cc4e7943a809c1a7f5a31a9cf7e9c35adc7348fcaac6419ca06a\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:5,} returns container id \"d96f9b6b28f75a14646275fa9003956dbbbade5886c46b60dd834a36084849f7\""
	Mar 12 00:23:18 old-k8s-version-571339 containerd[567]: time="2024-03-12T00:23:18.154755098Z" level=info msg="StartContainer for \"d96f9b6b28f75a14646275fa9003956dbbbade5886c46b60dd834a36084849f7\""
	Mar 12 00:23:18 old-k8s-version-571339 containerd[567]: time="2024-03-12T00:23:18.215771808Z" level=info msg="StartContainer for \"d96f9b6b28f75a14646275fa9003956dbbbade5886c46b60dd834a36084849f7\" returns successfully"
	Mar 12 00:23:18 old-k8s-version-571339 containerd[567]: time="2024-03-12T00:23:18.241899984Z" level=info msg="shim disconnected" id=d96f9b6b28f75a14646275fa9003956dbbbade5886c46b60dd834a36084849f7
	Mar 12 00:23:18 old-k8s-version-571339 containerd[567]: time="2024-03-12T00:23:18.241971728Z" level=warning msg="cleaning up after shim disconnected" id=d96f9b6b28f75a14646275fa9003956dbbbade5886c46b60dd834a36084849f7 namespace=k8s.io
	Mar 12 00:23:18 old-k8s-version-571339 containerd[567]: time="2024-03-12T00:23:18.241984232Z" level=info msg="cleaning up dead shim"
	Mar 12 00:23:18 old-k8s-version-571339 containerd[567]: time="2024-03-12T00:23:18.255813095Z" level=warning msg="cleanup warnings time=\"2024-03-12T00:23:18Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3193 runtime=io.containerd.runc.v2\n"
	Mar 12 00:23:19 old-k8s-version-571339 containerd[567]: time="2024-03-12T00:23:19.162394641Z" level=info msg="RemoveContainer for \"6c6a6c77b06d378178da0ea3b51f9fd2adc77daafe8f04ab0c47fc47771dcd09\""
	Mar 12 00:23:19 old-k8s-version-571339 containerd[567]: time="2024-03-12T00:23:19.167991910Z" level=info msg="RemoveContainer for \"6c6a6c77b06d378178da0ea3b51f9fd2adc77daafe8f04ab0c47fc47771dcd09\" returns successfully"
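The PullImage failures in this containerd section all trace to DNS: fake.domain does not resolve via the 192.168.76.1:53 resolver named in the errors. A hedged way to confirm this, assuming the command is run from the host that can reach that resolver:

	nslookup fake.domain 192.168.76.1

This is expected to return NXDOMAIN, matching the "no such host" errors containerd reports.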
	
	
	==> coredns [0d3039260ff7a1d3154eac6a37f5460535e860a77d3473b023822022e245e097] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:58748 - 36917 "HINFO IN 4395544677134104827.6798169659976917837. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.051213619s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	I0312 00:20:28.835220       1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-03-12 00:19:58.834469372 +0000 UTC m=+0.115253210) (total time: 30.000594188s):
	Trace[2019727887]: [30.000594188s] [30.000594188s] END
	E0312 00:20:28.835294       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0312 00:20:28.837562       1 trace.go:116] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-03-12 00:19:58.835971764 +0000 UTC m=+0.116755610) (total time: 30.001536438s):
	Trace[939984059]: [30.001536438s] [30.001536438s] END
	E0312 00:20:28.837582       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0312 00:20:28.842345       1 trace.go:116] Trace[1474941318]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-03-12 00:19:58.829244014 +0000 UTC m=+0.110027860) (total time: 30.013077505s):
	Trace[1474941318]: [30.013077505s] [30.013077505s] END
	E0312 00:20:28.842365       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
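The three ListAndWatch failures above are 30s timeouts dialing the in-cluster apiserver Service. A quick reachability probe from the node, assuming the Service IP 10.96.0.1 shown in the errors (an unauthenticated request may get 401/403 once the apiserver is reachable, which still demonstrates connectivity):

	minikube -p old-k8s-version-571339 ssh -- curl -sk https://10.96.0.1:443/healthz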
	
	
	==> coredns [91ba0f1087505b74193c749143407b360ca52adeb6c8e6fed4c64111ff7ac963] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:46841 - 10479 "HINFO IN 3423868519258051186.756692252847698553. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.036606604s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-571339
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-571339
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e0a9a803bb8418ee87aee3b4880090eb65379520
	                    minikube.k8s.io/name=old-k8s-version-571339
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_12T00_17_14_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 12 Mar 2024 00:17:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-571339
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 12 Mar 2024 00:25:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 12 Mar 2024 00:20:47 +0000   Tue, 12 Mar 2024 00:17:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 12 Mar 2024 00:20:47 +0000   Tue, 12 Mar 2024 00:17:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 12 Mar 2024 00:20:47 +0000   Tue, 12 Mar 2024 00:17:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 12 Mar 2024 00:20:47 +0000   Tue, 12 Mar 2024 00:17:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-571339
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	System Info:
	  Machine ID:                 db945c8abecb4b39a647c266f9524661
	  System UUID:                b4f4bdad-4162-4003-b09b-7ab7462500c7
	  Boot ID:                    8c314cab-fe64-4f72-b005-d9231ff3e4e9
	  Kernel Version:             5.15.0-1055-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.6.28
	  Kubelet Version:            v1.20.0
	  Kube-Proxy Version:         v1.20.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m36s
	  kube-system                 coredns-74ff55c5b-pd7cs                           100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     8m8s
	  kube-system                 etcd-old-k8s-version-571339                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         8m16s
	  kube-system                 kindnet-rjqrt                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      8m8s
	  kube-system                 kube-apiserver-old-k8s-version-571339             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m16s
	  kube-system                 kube-controller-manager-old-k8s-version-571339    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m16s
	  kube-system                 kube-proxy-tvrz6                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m8s
	  kube-system                 kube-scheduler-old-k8s-version-571339             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m16s
	  kube-system                 metrics-server-9975d5f86-c87xf                    100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         6m24s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m7s
	  kubernetes-dashboard        dashboard-metrics-scraper-8d5bb5db8-5w7h2         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m23s
	  kubernetes-dashboard        kubernetes-dashboard-cd95d586-7dtbw               0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             420Mi (5%)  220Mi (2%)
	  ephemeral-storage  100Mi (0%)  0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  8m36s (x5 over 8m36s)  kubelet     Node old-k8s-version-571339 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m36s (x5 over 8m36s)  kubelet     Node old-k8s-version-571339 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m36s (x5 over 8m36s)  kubelet     Node old-k8s-version-571339 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m16s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m16s                  kubelet     Node old-k8s-version-571339 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m16s                  kubelet     Node old-k8s-version-571339 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m16s                  kubelet     Node old-k8s-version-571339 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m16s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                8m8s                   kubelet     Node old-k8s-version-571339 status is now: NodeReady
	  Normal  Starting                 8m7s                   kube-proxy  Starting kube-proxy.
	  Normal  Starting                 5m54s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m54s (x8 over 5m54s)  kubelet     Node old-k8s-version-571339 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m54s (x8 over 5m54s)  kubelet     Node old-k8s-version-571339 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m54s (x7 over 5m54s)  kubelet     Node old-k8s-version-571339 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m54s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 5m39s                  kube-proxy  Starting kube-proxy.
	
	
	==> dmesg <==
	[  +0.001026] FS-Cache: O-key=[8] 'fbd5c90000000000'
	[  +0.000688] FS-Cache: N-cookie c=00000042 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000944] FS-Cache: N-cookie d=000000002b48fe46{9p.inode} n=0000000019a34a10
	[  +0.001112] FS-Cache: N-key=[8] 'fbd5c90000000000'
	[  +0.003583] FS-Cache: Duplicate cookie detected
	[  +0.000712] FS-Cache: O-cookie c=0000003c [p=00000039 fl=226 nc=0 na=1]
	[  +0.000955] FS-Cache: O-cookie d=000000002b48fe46{9p.inode} n=0000000082ac578f
	[  +0.001051] FS-Cache: O-key=[8] 'fbd5c90000000000'
	[  +0.000746] FS-Cache: N-cookie c=00000043 [p=00000039 fl=2 nc=0 na=1]
	[  +0.001053] FS-Cache: N-cookie d=000000002b48fe46{9p.inode} n=00000000c300ad87
	[  +0.001076] FS-Cache: N-key=[8] 'fbd5c90000000000'
	[  +2.667407] FS-Cache: Duplicate cookie detected
	[  +0.000857] FS-Cache: O-cookie c=0000003a [p=00000039 fl=226 nc=0 na=1]
	[  +0.001131] FS-Cache: O-cookie d=000000002b48fe46{9p.inode} n=0000000050caa49d
	[  +0.001172] FS-Cache: O-key=[8] 'fad5c90000000000'
	[  +0.000698] FS-Cache: N-cookie c=00000045 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000911] FS-Cache: N-cookie d=000000002b48fe46{9p.inode} n=0000000019a34a10
	[  +0.001045] FS-Cache: N-key=[8] 'fad5c90000000000'
	[  +0.373823] FS-Cache: Duplicate cookie detected
	[  +0.000883] FS-Cache: O-cookie c=0000003f [p=00000039 fl=226 nc=0 na=1]
	[  +0.000950] FS-Cache: O-cookie d=000000002b48fe46{9p.inode} n=00000000896a7b32
	[  +0.001046] FS-Cache: O-key=[8] '00d6c90000000000'
	[  +0.000794] FS-Cache: N-cookie c=00000046 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000954] FS-Cache: N-cookie d=000000002b48fe46{9p.inode} n=00000000a0cc2f4a
	[  +0.001057] FS-Cache: N-key=[8] '00d6c90000000000'
	
	
	==> etcd [00fea42a626bc543839c933d2b36dd4155e2329531f2c8a74fa65079753377a9] <==
	raft2024/03/12 00:17:03 INFO: ea7e25599daad906 is starting a new election at term 1
	raft2024/03/12 00:17:03 INFO: ea7e25599daad906 became candidate at term 2
	raft2024/03/12 00:17:03 INFO: ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2
	raft2024/03/12 00:17:03 INFO: ea7e25599daad906 became leader at term 2
	raft2024/03/12 00:17:03 INFO: raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2
	2024-03-12 00:17:03.264126 I | embed: ready to serve client requests
	2024-03-12 00:17:03.265898 I | embed: serving client requests on 192.168.76.2:2379
	2024-03-12 00:17:03.266234 I | etcdserver: setting up the initial cluster version to 3.4
	2024-03-12 00:17:03.267019 N | etcdserver/membership: set the initial cluster version to 3.4
	2024-03-12 00:17:03.267274 I | etcdserver/api: enabled capabilities for version 3.4
	2024-03-12 00:17:03.268342 I | embed: ready to serve client requests
	2024-03-12 00:17:03.273905 I | embed: serving client requests on 127.0.0.1:2379
	2024-03-12 00:17:03.288452 I | etcdserver: published {Name:old-k8s-version-571339 ClientURLs:[https://192.168.76.2:2379]} to cluster 6f20f2c4b2fb5f8a
	2024-03-12 00:17:26.958744 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-12 00:17:33.221760 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-12 00:17:43.221760 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-12 00:17:53.221923 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-12 00:18:03.221860 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-12 00:18:13.221887 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-12 00:18:23.221942 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-12 00:18:33.222989 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-12 00:18:43.221793 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-12 00:18:53.222060 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-12 00:19:03.221826 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-12 00:19:13.222439 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> etcd [227b5f4c3ec0b541f1e734b3a9400260044363214781e1a18f0928e954c98086] <==
	2024-03-12 00:21:38.264632 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-12 00:21:48.264533 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-12 00:21:58.264572 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-12 00:22:08.264514 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-12 00:22:18.264590 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-12 00:22:28.264534 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-12 00:22:38.264631 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-12 00:22:48.264568 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-12 00:22:58.264646 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-12 00:23:08.264600 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-12 00:23:18.264710 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-12 00:23:28.264981 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-12 00:23:38.272706 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-12 00:23:48.264510 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-12 00:23:58.264638 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-12 00:24:08.264763 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-12 00:24:18.264551 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-12 00:24:28.264503 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-12 00:24:38.267755 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-12 00:24:48.264541 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-12 00:24:58.264649 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-12 00:25:08.264631 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-12 00:25:18.264493 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-12 00:25:28.264606 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-12 00:25:38.264563 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> kernel <==
	 00:25:38 up  5:08,  0 users,  load average: 0.39, 1.82, 2.49
	Linux old-k8s-version-571339 5.15.0-1055-aws #60~20.04.1-Ubuntu SMP Thu Feb 22 15:54:21 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kindnet [1a5414516a6e0b5fd6faf9a04f4428d130258262f60692b0dffc8b7ffc8541a6] <==
	I0312 00:17:31.538796       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0312 00:17:31.538870       1 main.go:107] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I0312 00:17:31.539030       1 main.go:116] setting mtu 1500 for CNI 
	I0312 00:17:31.539041       1 main.go:146] kindnetd IP family: "ipv4"
	I0312 00:17:31.539052       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0312 00:18:01.763035       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I0312 00:18:01.780369       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0312 00:18:01.780403       1 main.go:227] handling current node
	I0312 00:18:11.839109       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0312 00:18:11.839137       1 main.go:227] handling current node
	I0312 00:18:21.854115       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0312 00:18:21.854148       1 main.go:227] handling current node
	I0312 00:18:31.860517       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0312 00:18:31.860972       1 main.go:227] handling current node
	I0312 00:18:41.871857       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0312 00:18:41.871888       1 main.go:227] handling current node
	I0312 00:18:51.884995       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0312 00:18:51.885025       1 main.go:227] handling current node
	I0312 00:19:01.889659       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0312 00:19:01.889696       1 main.go:227] handling current node
	I0312 00:19:11.939839       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0312 00:19:11.939867       1 main.go:227] handling current node
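
The kindnet log shows the daemon's reconcile loop: list the cluster's nodes, handle each, sleep about ten seconds, repeat, retrying on transient apiserver errors (note the initial i/o timeout at 00:18:01 before the service network was reachable). A rough client-go sketch of that pattern, illustrative rather than kindnetd's actual code:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/rest"
    )

    func main() {
    	// kindnetd runs as a DaemonSet pod, so in-cluster config applies.
    	cfg, err := rest.InClusterConfig()
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)
    	for {
    		nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
    		if err != nil {
    			// Matches "Failed to get nodes, retrying after error" above.
    			fmt.Println("failed to get nodes, retrying:", err)
    		} else {
    			for _, n := range nodes.Items {
    				fmt.Println("handling node:", n.Name)
    			}
    		}
    		time.Sleep(10 * time.Second)
    	}
    }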
	
	
	==> kindnet [ae84159c4657eff5eabe4d3d9526af6b40457144653c8a1c7e1b3bc077bdcad0] <==
	I0312 00:23:29.334974       1 main.go:227] handling current node
	I0312 00:23:39.346212       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0312 00:23:39.346244       1 main.go:227] handling current node
	I0312 00:23:49.358740       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0312 00:23:49.358768       1 main.go:227] handling current node
	I0312 00:23:59.367557       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0312 00:23:59.367586       1 main.go:227] handling current node
	I0312 00:24:09.387180       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0312 00:24:09.387229       1 main.go:227] handling current node
	I0312 00:24:19.392637       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0312 00:24:19.392668       1 main.go:227] handling current node
	I0312 00:24:29.402303       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0312 00:24:29.402335       1 main.go:227] handling current node
	I0312 00:24:39.412349       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0312 00:24:39.412577       1 main.go:227] handling current node
	I0312 00:24:49.422974       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0312 00:24:49.423005       1 main.go:227] handling current node
	I0312 00:24:59.430126       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0312 00:24:59.430152       1 main.go:227] handling current node
	I0312 00:25:09.446156       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0312 00:25:09.446189       1 main.go:227] handling current node
	I0312 00:25:19.458013       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0312 00:25:19.458044       1 main.go:227] handling current node
	I0312 00:25:29.465499       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0312 00:25:29.465543       1 main.go:227] handling current node
	
	
	==> kube-apiserver [022154a50546e744b25648ac078a4535c3a97e91f97547e8008e89235fd126f5] <==
	I0312 00:17:11.466290       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0312 00:17:11.466325       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0312 00:17:11.474572       1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
	I0312 00:17:11.480758       1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
	I0312 00:17:11.481718       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0312 00:17:11.996122       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0312 00:17:12.059392       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0312 00:17:12.173874       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I0312 00:17:12.175259       1 controller.go:606] quota admission added evaluator for: endpoints
	I0312 00:17:12.182037       1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0312 00:17:13.197577       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	I0312 00:17:13.932657       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I0312 00:17:13.986416       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I0312 00:17:22.454095       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	I0312 00:17:30.153157       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	I0312 00:17:30.332538       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
	I0312 00:17:45.643854       1 client.go:360] parsed scheme: "passthrough"
	I0312 00:17:45.643899       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0312 00:17:45.643935       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0312 00:18:20.227620       1 client.go:360] parsed scheme: "passthrough"
	I0312 00:18:20.227856       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0312 00:18:20.227955       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0312 00:18:57.603267       1 client.go:360] parsed scheme: "passthrough"
	I0312 00:18:57.603338       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0312 00:18:57.603347       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	
	==> kube-apiserver [e90658574cccc9b56ea1fd38865b78eb14b34d54f7b6d6f655f8b82d026ee372] <==
	I0312 00:21:49.616082       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0312 00:21:49.616091       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0312 00:22:27.392570       1 client.go:360] parsed scheme: "passthrough"
	I0312 00:22:27.392613       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0312 00:22:27.392622       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0312 00:22:59.945345       1 handler_proxy.go:102] no RequestInfo found in the context
	E0312 00:22:59.945453       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0312 00:22:59.945470       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0312 00:23:07.476140       1 client.go:360] parsed scheme: "passthrough"
	I0312 00:23:07.476181       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0312 00:23:07.476191       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0312 00:23:46.534666       1 client.go:360] parsed scheme: "passthrough"
	I0312 00:23:46.534712       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0312 00:23:46.534723       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0312 00:24:21.305367       1 client.go:360] parsed scheme: "passthrough"
	I0312 00:24:21.305410       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0312 00:24:21.305420       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0312 00:24:57.400277       1 handler_proxy.go:102] no RequestInfo found in the context
	E0312 00:24:57.400355       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0312 00:24:57.400368       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0312 00:25:02.133077       1 client.go:360] parsed scheme: "passthrough"
	I0312 00:25:02.133125       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0312 00:25:02.133135       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
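
The repeating 503 for v1beta1.metrics.k8s.io means the aggregated metrics API is registered but its backing metrics-server Service has no healthy endpoints, which matches the metrics-server pod stuck in ImagePullBackOff in the kubelet log below. A small sketch that reproduces the check from a client, assuming a kubeconfig at the default path:

    package main

    import (
    	"fmt"

    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)
    	// Discovery against the aggregated group/version is proxied by the
    	// apiserver to metrics-server, so it fails the same way the
    	// apiserver's OpenAPI controller does above.
    	if _, err := client.Discovery().ServerResourcesForGroupVersion("metrics.k8s.io/v1beta1"); err != nil {
    		fmt.Println("metrics API unavailable:", err)
    	} else {
    		fmt.Println("metrics API reachable")
    	}
    }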
	
	
	==> kube-controller-manager [ac83611721f7a3d26415ed5ae3625edece62f8bd00bc9a63ce61ffa2ad2c9fbc] <==
	I0312 00:17:30.259600       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-pd7cs"
	I0312 00:17:30.288792       1 shared_informer.go:247] Caches are synced for resource quota 
	I0312 00:17:30.293761       1 shared_informer.go:247] Caches are synced for HPA 
	I0312 00:17:30.293789       1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator 
	I0312 00:17:30.294857       1 shared_informer.go:247] Caches are synced for daemon sets 
	I0312 00:17:30.368611       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-6ptvm"
	I0312 00:17:30.456660       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I0312 00:17:30.575836       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-tvrz6"
	I0312 00:17:30.669979       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-rjqrt"
	E0312 00:17:30.742350       1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"4dbbd939-c3b2-48ec-904d-f7cb7443f93e", ResourceVersion:"270", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63845799433, loc:(*time.Location)(0x632eb80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4001376cc0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4001376ce0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.
LabelSelector)(0x4001376d00), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.Gl
usterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0x400128de40), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001376
d20), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeS
ource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001376d40), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil),
AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.20.0", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4001376d80)}}, Resources:v1.R
esourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x400152e540), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPo
licy:"Always", TerminationGracePeriodSeconds:(*int64)(0x4000fc1ce8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x4000b35b90), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), Runtime
ClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x4000116220)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x4000fc1d68)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	I0312 00:17:30.743598       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0312 00:17:30.743618       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	E0312 00:17:30.747141       1 daemon_controller.go:320] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"b550cbbf-2525-4689-b8bf-859affd27c33", ResourceVersion:"288", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63845799434, loc:(*time.Location)(0x632eb80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"},\"name\":\"kindnet\",\"namespace\":\"kube-system\"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"k
indnet\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"HOST_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.hostIP\"}}},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}},{\"name\":\"POD_SUBNET\",\"value\":\"10.244.0.0/16\"}],\"image\":\"docker.io/kindest/kindnetd:v20240202-8f1494ea\",\"name\":\"kindnet-cni\",\"resources\":{\"limits\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"},\"requests\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"}},\"securityContext\":{\"capabilities\":{\"add\":[\"NET_RAW\",\"NET_ADMIN\"]},\"privileged\":false},\"volumeMounts\":[{\"mountPath\":\"/etc/cni/net.d\",\"name\":\"cni-cfg\"},{\"mountPath\":\"/run/xtables.lock\",\"name\":\"xtables-lock\",\"readOnly\":false},{\"mountPath\":\"/lib/modules\",\"name\":\"lib-modules\",\"readOnly\":true}]}],\"hostNetwork\":true,\"serviceAccountName\":\"kindnet\",\"tolerations\":[{\"effect\":\"NoSchedule\",\"operator\
":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/cni/net.d\",\"type\":\"DirectoryOrCreate\"},\"name\":\"cni-cfg\"},{\"hostPath\":{\"path\":\"/run/xtables.lock\",\"type\":\"FileOrCreate\"},\"name\":\"xtables-lock\"},{\"hostPath\":{\"path\":\"/lib/modules\"},\"name\":\"lib-modules\"}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubectl-client-side-apply", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4001376de0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4001376e00)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x4001376e20), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string
{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001376e40), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil),
FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001376e60), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.Glust
erfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001376e80), EmptyDi
r:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil),
PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kindnet-cni", Image:"docker.io/kindest/kindnetd:v20240202-8f1494ea", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4001376ea0)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4001376ee0)}, v1.EnvVar{Name:"POD_SUBNET", Value:"10.244.0.0/16", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:
0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:
(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x400152e5a0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x4000fc1f68), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x4000b35c00), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}},
HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x4000116248)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x4000fc1fb0)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
	I0312 00:17:30.758024       1 shared_informer.go:247] Caches are synced for garbage collector 
	E0312 00:17:30.914175       1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"4dbbd939-c3b2-48ec-904d-f7cb7443f93e", ResourceVersion:"404", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63845799433, loc:(*time.Location)(0x632eb80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x40019bcc80), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x40019bcca0)}, v1.ManagedFieldsEntry{Manager:"kube-co
ntroller-manager", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x40019bccc0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x40019bcce0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x40019bcd00), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElastic
BlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0x4001951540), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSour
ce)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x40019bcd20), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSo
urce)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x40019bcd40), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil),
Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.20.0", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil),
WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0x40019bcd80)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"F
ile", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x4001954d20), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x400194b5c8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x40004219d0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)
(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x40001177a8)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x400194b618)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:1, NumberReady:0, ObservedGeneration:1, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:1, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest ve
rsion and try again
	I0312 00:17:31.234380       1 request.go:655] Throttling request took 1.013767022s, request: GET:https://192.168.76.2:8443/apis/rbac.authorization.k8s.io/v1?timeout=32s
	I0312 00:17:31.936700       1 shared_informer.go:240] Waiting for caches to sync for resource quota
	I0312 00:17:31.936752       1 shared_informer.go:247] Caches are synced for resource quota 
	I0312 00:17:32.002734       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
	I0312 00:17:32.034474       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-6ptvm"
	I0312 00:17:35.022157       1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I0312 00:19:13.187252       1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-9975d5f86 to 1"
	I0312 00:19:13.306136       1 event.go:291] "Event occurred" object="kube-system/metrics-server-9975d5f86" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-9975d5f86-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
	E0312 00:19:13.438005       1 replica_set.go:532] sync "kube-system/metrics-server-9975d5f86" failed with pods "metrics-server-9975d5f86-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	E0312 00:19:13.702847       1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
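
The daemon_controller.go failures above ("the object has been modified; please apply your changes to the latest version and try again") are ordinary optimistic-concurrency conflicts: kubeadm and the controller manager raced on the same DaemonSet resourceVersion, and the loser re-reads and retries on its next sync, so they are noisy but harmless here. The canonical client-side pattern is client-go's RetryOnConflict; a minimal sketch, with the object name taken from the log and the mutation left as a placeholder:

    package main

    import (
    	"context"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    	"k8s.io/client-go/util/retry"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)
    	err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
    		// Re-read inside the retry loop so each attempt sees the
    		// latest resourceVersion.
    		ds, err := client.AppsV1().DaemonSets("kube-system").Get(context.TODO(), "kube-proxy", metav1.GetOptions{})
    		if err != nil {
    			return err
    		}
    		// ...mutate ds here...
    		_, err = client.AppsV1().DaemonSets("kube-system").Update(context.TODO(), ds, metav1.UpdateOptions{})
    		return err // a Conflict error triggers another attempt
    	})
    	if err != nil {
    		panic(err)
    	}
    }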
	
	
	==> kube-controller-manager [ea472223f1505d76ec6e5c18af4f3ab7760ebdefed097a213c78a396e15d7ba7] <==
	E0312 00:21:15.975987       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0312 00:21:19.975257       1 request.go:655] Throttling request took 1.044857219s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W0312 00:21:20.828522       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0312 00:21:46.477717       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0312 00:21:52.478990       1 request.go:655] Throttling request took 1.048353316s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W0312 00:21:53.331236       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0312 00:22:16.979548       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0312 00:22:24.981811       1 request.go:655] Throttling request took 1.048339413s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W0312 00:22:25.833113       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0312 00:22:47.481451       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0312 00:22:57.483568       1 request.go:655] Throttling request took 1.048271219s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W0312 00:22:58.335148       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0312 00:23:17.983363       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0312 00:23:29.986059       1 request.go:655] Throttling request took 1.048464229s, request: GET:https://192.168.76.2:8443/apis/apiextensions.k8s.io/v1beta1?timeout=32s
	W0312 00:23:30.837394       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0312 00:23:48.485196       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0312 00:24:02.487839       1 request.go:655] Throttling request took 1.048341697s, request: GET:https://192.168.76.2:8443/apis/certificates.k8s.io/v1beta1?timeout=32s
	W0312 00:24:03.339388       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0312 00:24:18.987473       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0312 00:24:34.989723       1 request.go:655] Throttling request took 1.048394888s, request: GET:https://192.168.76.2:8443/apis/coordination.k8s.io/v1?timeout=32s
	W0312 00:24:35.841125       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0312 00:24:49.489310       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0312 00:25:07.491585       1 request.go:655] Throttling request took 1.048277767s, request: GET:https://192.168.76.2:8443/apis/scheduling.k8s.io/v1beta1?timeout=32s
	W0312 00:25:08.343138       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0312 00:25:19.991376       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
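
This block is one failure repeating on every resync: the quota controller and garbage collector re-discover all API groups, the aggregated metrics.k8s.io group still answers 503 (see the apiserver log above), and the resulting burst of discovery requests trips client-go's client-side token-bucket rate limiter, hence the "Throttling request took ~1.05s" lines. A sketch of that limiter's behavior; the QPS and burst values here are illustrative, not the controller manager's configured ones:

    package main

    import (
    	"fmt"
    	"time"

    	"k8s.io/client-go/util/flowcontrol"
    )

    func main() {
    	// 5 requests/s steady state with a burst of 10 tokens (illustrative).
    	limiter := flowcontrol.NewTokenBucketRateLimiter(5, 10)
    	for i := 1; i <= 15; i++ {
    		start := time.Now()
    		limiter.Accept() // blocks until a token is available
    		if wait := time.Since(start); wait > time.Millisecond {
    			fmt.Printf("request %d throttled for %s\n", i, wait)
    		}
    	}
    }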
	
	
	==> kube-proxy [46d69c632200b08b2b8f94cd051969df887d9d074acace5d606fef37cc84295e] <==
	I0312 00:19:59.631896       1 node.go:172] Successfully retrieved node IP: 192.168.76.2
	I0312 00:19:59.632063       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
	W0312 00:19:59.655911       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0312 00:19:59.656200       1 server_others.go:185] Using iptables Proxier.
	I0312 00:19:59.656568       1 server.go:650] Version: v1.20.0
	I0312 00:19:59.657429       1 config.go:315] Starting service config controller
	I0312 00:19:59.657588       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0312 00:19:59.657686       1 config.go:224] Starting endpoint slice config controller
	I0312 00:19:59.657775       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0312 00:19:59.758829       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0312 00:19:59.766925       1 shared_informer.go:247] Caches are synced for service config 
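
The "Unknown proxy mode \"\", assuming iptables proxy" warning is kube-proxy's standard fallback when the mode field in its configuration is left empty, so both restarts end up on the iptables proxier. A toy sketch of that fallback logic, not kube-proxy's actual code:

    package main

    import "fmt"

    // proxyMode mirrors the empty-config fallback seen in the log above.
    func proxyMode(configured string) string {
    	switch configured {
    	case "iptables", "ipvs", "userspace":
    		return configured
    	default:
    		fmt.Printf("Unknown proxy mode %q, assuming iptables proxy\n", configured)
    		return "iptables"
    	}
    }

    func main() {
    	fmt.Println("using proxier:", proxyMode(""))
    }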
	
	
	==> kube-proxy [8abc9a2fec8f5340e92089f46c5ff2bf798571fbcb6c7ce9545d0e353715bed4] <==
	I0312 00:17:31.669418       1 node.go:172] Successfully retrieved node IP: 192.168.76.2
	I0312 00:17:31.669524       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
	W0312 00:17:31.693844       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0312 00:17:31.693933       1 server_others.go:185] Using iptables Proxier.
	I0312 00:17:31.694132       1 server.go:650] Version: v1.20.0
	I0312 00:17:31.694625       1 config.go:315] Starting service config controller
	I0312 00:17:31.694633       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0312 00:17:31.696431       1 config.go:224] Starting endpoint slice config controller
	I0312 00:17:31.696443       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0312 00:17:31.794750       1 shared_informer.go:247] Caches are synced for service config 
	I0312 00:17:31.796563       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	
	==> kube-scheduler [c3f64500c09efd7fdf78260f3bef5ed1adaefa3e3a847a7540726cbee6bd042f] <==
	I0312 00:17:05.858423       1 serving.go:331] Generated self-signed cert in-memory
	W0312 00:17:10.679497       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0312 00:17:10.683371       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0312 00:17:10.683580       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0312 00:17:10.683669       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0312 00:17:10.757445       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0312 00:17:10.757500       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0312 00:17:10.770944       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0312 00:17:10.757529       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0312 00:17:10.779100       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0312 00:17:10.779428       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0312 00:17:10.779639       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0312 00:17:10.779830       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0312 00:17:10.780033       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0312 00:17:10.780224       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0312 00:17:10.780440       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0312 00:17:10.785656       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0312 00:17:10.787497       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0312 00:17:10.787746       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0312 00:17:10.788632       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0312 00:17:10.795407       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0312 00:17:11.595607       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0312 00:17:12.371130       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
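	The burst of list/watch "forbidden" errors above is consistent with a startup race rather than a standing RBAC problem: the restarted apiserver denies requests until its authorization caches warm up, and the errors stop at the "Caches are synced" line just above. As a sketch (not part of the test run), the scheduler's effective permissions could be confirmed afterwards with kubectl impersonation:
	
	    kubectl --context old-k8s-version-571339 auth can-i list pods --all-namespaces --as=system:kube-scheduler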
	
	==> kube-scheduler [cda4f1be508f3de4744e406ac4acfcb87143068155c189f0e7506f78db3a42c9] <==
	I0312 00:19:49.872110       1 serving.go:331] Generated self-signed cert in-memory
	W0312 00:19:56.406843       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0312 00:19:56.410203       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0312 00:19:56.410244       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0312 00:19:56.410250       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0312 00:19:56.816131       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0312 00:19:56.819259       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0312 00:19:56.819474       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0312 00:19:56.820827       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0312 00:19:57.019599       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
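	The requestheader warning above carries its own suggested fix. Since kube-scheduler authenticates as the user system:kube-scheduler rather than as a service account, the equivalent binding would use --user instead of --serviceaccount (an illustrative sketch; the binding name here is arbitrary, and kubeadm-style clusters normally ship such a binding already):
	
	    kubectl --context old-k8s-version-571339 -n kube-system create rolebinding scheduler-authentication-reader \
	      --role=extension-apiserver-authentication-reader --user=system:kube-scheduler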
	
	==> kubelet <==
	Mar 12 00:23:59 old-k8s-version-571339 kubelet[663]: E0312 00:23:59.134729     663 pod_workers.go:191] Error syncing pod 917984d8-3e49-416e-bb76-594879112f42 ("dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"
	Mar 12 00:24:02 old-k8s-version-571339 kubelet[663]: E0312 00:24:02.135523     663 pod_workers.go:191] Error syncing pod f8f3ac0e-3c2e-451d-8e5a-d3937eab4142 ("metrics-server-9975d5f86-c87xf_kube-system(f8f3ac0e-3c2e-451d-8e5a-d3937eab4142)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Mar 12 00:24:11 old-k8s-version-571339 kubelet[663]: I0312 00:24:11.134423     663 scope.go:95] [topologymanager] RemoveContainer - Container ID: d96f9b6b28f75a14646275fa9003956dbbbade5886c46b60dd834a36084849f7
	Mar 12 00:24:11 old-k8s-version-571339 kubelet[663]: E0312 00:24:11.134786     663 pod_workers.go:191] Error syncing pod 917984d8-3e49-416e-bb76-594879112f42 ("dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"
	Mar 12 00:24:16 old-k8s-version-571339 kubelet[663]: E0312 00:24:16.135140     663 pod_workers.go:191] Error syncing pod f8f3ac0e-3c2e-451d-8e5a-d3937eab4142 ("metrics-server-9975d5f86-c87xf_kube-system(f8f3ac0e-3c2e-451d-8e5a-d3937eab4142)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Mar 12 00:24:22 old-k8s-version-571339 kubelet[663]: I0312 00:24:22.135005     663 scope.go:95] [topologymanager] RemoveContainer - Container ID: d96f9b6b28f75a14646275fa9003956dbbbade5886c46b60dd834a36084849f7
	Mar 12 00:24:22 old-k8s-version-571339 kubelet[663]: E0312 00:24:22.135430     663 pod_workers.go:191] Error syncing pod 917984d8-3e49-416e-bb76-594879112f42 ("dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"
	Mar 12 00:24:31 old-k8s-version-571339 kubelet[663]: E0312 00:24:31.135197     663 pod_workers.go:191] Error syncing pod f8f3ac0e-3c2e-451d-8e5a-d3937eab4142 ("metrics-server-9975d5f86-c87xf_kube-system(f8f3ac0e-3c2e-451d-8e5a-d3937eab4142)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Mar 12 00:24:35 old-k8s-version-571339 kubelet[663]: I0312 00:24:35.134419     663 scope.go:95] [topologymanager] RemoveContainer - Container ID: d96f9b6b28f75a14646275fa9003956dbbbade5886c46b60dd834a36084849f7
	Mar 12 00:24:35 old-k8s-version-571339 kubelet[663]: E0312 00:24:35.134852     663 pod_workers.go:191] Error syncing pod 917984d8-3e49-416e-bb76-594879112f42 ("dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"
	Mar 12 00:24:43 old-k8s-version-571339 kubelet[663]: E0312 00:24:43.135594     663 pod_workers.go:191] Error syncing pod f8f3ac0e-3c2e-451d-8e5a-d3937eab4142 ("metrics-server-9975d5f86-c87xf_kube-system(f8f3ac0e-3c2e-451d-8e5a-d3937eab4142)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Mar 12 00:24:47 old-k8s-version-571339 kubelet[663]: I0312 00:24:47.134423     663 scope.go:95] [topologymanager] RemoveContainer - Container ID: d96f9b6b28f75a14646275fa9003956dbbbade5886c46b60dd834a36084849f7
	Mar 12 00:24:47 old-k8s-version-571339 kubelet[663]: E0312 00:24:47.134779     663 pod_workers.go:191] Error syncing pod 917984d8-3e49-416e-bb76-594879112f42 ("dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"
	Mar 12 00:24:56 old-k8s-version-571339 kubelet[663]: E0312 00:24:56.137730     663 pod_workers.go:191] Error syncing pod f8f3ac0e-3c2e-451d-8e5a-d3937eab4142 ("metrics-server-9975d5f86-c87xf_kube-system(f8f3ac0e-3c2e-451d-8e5a-d3937eab4142)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Mar 12 00:24:59 old-k8s-version-571339 kubelet[663]: I0312 00:24:59.134388     663 scope.go:95] [topologymanager] RemoveContainer - Container ID: d96f9b6b28f75a14646275fa9003956dbbbade5886c46b60dd834a36084849f7
	Mar 12 00:24:59 old-k8s-version-571339 kubelet[663]: E0312 00:24:59.134749     663 pod_workers.go:191] Error syncing pod 917984d8-3e49-416e-bb76-594879112f42 ("dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"
	Mar 12 00:25:07 old-k8s-version-571339 kubelet[663]: E0312 00:25:07.135133     663 pod_workers.go:191] Error syncing pod f8f3ac0e-3c2e-451d-8e5a-d3937eab4142 ("metrics-server-9975d5f86-c87xf_kube-system(f8f3ac0e-3c2e-451d-8e5a-d3937eab4142)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Mar 12 00:25:10 old-k8s-version-571339 kubelet[663]: I0312 00:25:10.134529     663 scope.go:95] [topologymanager] RemoveContainer - Container ID: d96f9b6b28f75a14646275fa9003956dbbbade5886c46b60dd834a36084849f7
	Mar 12 00:25:10 old-k8s-version-571339 kubelet[663]: E0312 00:25:10.134865     663 pod_workers.go:191] Error syncing pod 917984d8-3e49-416e-bb76-594879112f42 ("dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"
	Mar 12 00:25:19 old-k8s-version-571339 kubelet[663]: E0312 00:25:19.135515     663 pod_workers.go:191] Error syncing pod f8f3ac0e-3c2e-451d-8e5a-d3937eab4142 ("metrics-server-9975d5f86-c87xf_kube-system(f8f3ac0e-3c2e-451d-8e5a-d3937eab4142)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Mar 12 00:25:22 old-k8s-version-571339 kubelet[663]: I0312 00:25:22.135847     663 scope.go:95] [topologymanager] RemoveContainer - Container ID: d96f9b6b28f75a14646275fa9003956dbbbade5886c46b60dd834a36084849f7
	Mar 12 00:25:22 old-k8s-version-571339 kubelet[663]: E0312 00:25:22.136716     663 pod_workers.go:191] Error syncing pod 917984d8-3e49-416e-bb76-594879112f42 ("dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"
	Mar 12 00:25:34 old-k8s-version-571339 kubelet[663]: E0312 00:25:34.135529     663 pod_workers.go:191] Error syncing pod f8f3ac0e-3c2e-451d-8e5a-d3937eab4142 ("metrics-server-9975d5f86-c87xf_kube-system(f8f3ac0e-3c2e-451d-8e5a-d3937eab4142)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Mar 12 00:25:37 old-k8s-version-571339 kubelet[663]: I0312 00:25:37.134503     663 scope.go:95] [topologymanager] RemoveContainer - Container ID: d96f9b6b28f75a14646275fa9003956dbbbade5886c46b60dd834a36084849f7
	Mar 12 00:25:37 old-k8s-version-571339 kubelet[663]: E0312 00:25:37.134884     663 pod_workers.go:191] Error syncing pod 917984d8-3e49-416e-bb76-594879112f42 ("dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-5w7h2_kubernetes-dashboard(917984d8-3e49-416e-bb76-594879112f42)"
	
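	Two pods are wedged in the kubelet log above: metrics-server in ImagePullBackOff (its image reference, fake.domain/registry.k8s.io/echoserver:1.4, points at an unresolvable test registry) and dashboard-metrics-scraper in CrashLoopBackOff. A standard triage for either state, sketched with the pod names taken from the log:
	
	    kubectl --context old-k8s-version-571339 -n kube-system describe pod metrics-server-9975d5f86-c87xf
	    kubectl --context old-k8s-version-571339 -n kubernetes-dashboard logs dashboard-metrics-scraper-8d5bb5db8-5w7h2 --previous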
	
	==> kubernetes-dashboard [cb775d92b00e5c14170849b1b42ccfd48f3c9d18c9b5da2f8234588eaf4aa2ec] <==
	2024/03/12 00:20:21 Using namespace: kubernetes-dashboard
	2024/03/12 00:20:21 Using in-cluster config to connect to apiserver
	2024/03/12 00:20:21 Using secret token for csrf signing
	2024/03/12 00:20:21 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/03/12 00:20:21 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/03/12 00:20:21 Successful initial request to the apiserver, version: v1.20.0
	2024/03/12 00:20:21 Generating JWE encryption key
	2024/03/12 00:20:21 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/03/12 00:20:21 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/03/12 00:20:23 Initializing JWE encryption key from synchronized object
	2024/03/12 00:20:23 Creating in-cluster Sidecar client
	2024/03/12 00:20:23 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/12 00:20:23 Serving insecurely on HTTP port: 9090
	2024/03/12 00:20:53 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/12 00:21:23 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/12 00:21:53 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/12 00:22:23 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/12 00:22:53 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/12 00:23:23 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/12 00:23:53 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/12 00:24:23 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/12 00:24:53 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/12 00:25:23 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/12 00:20:21 Starting overwatch
	
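	The repeating health-check failure above follows directly from the kubelet log: while dashboard-metrics-scraper crash-loops, its Service has no ready endpoints, so the dashboard's metric client gets "unable to handle the request" on every 30-second retry. A minimal check of that linkage (a sketch, assuming kubectl access to this cluster):
	
	    kubectl --context old-k8s-version-571339 -n kubernetes-dashboard get endpoints dashboard-metrics-scraper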
	
	==> storage-provisioner [0c3514f843ab806b20582bed37c5a7606b322a3eae956ca0d2a4c8b59c7beb86] <==
	I0312 00:19:59.274477       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0312 00:20:29.276251       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [98a1386a1a083a30c283c882c4ad3a528364088aba6315aa3bd42ba324436879] <==
	I0312 00:20:43.239813       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0312 00:20:43.256311       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0312 00:20:43.256518       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0312 00:21:00.749561       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0312 00:21:00.751690       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-571339_e3d1ccc1-62a9-403f-a28e-d96e346a4c93!
	I0312 00:21:00.751945       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"10706e07-fea2-49a4-8ad4-85b3effe5dc7", APIVersion:"v1", ResourceVersion:"842", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-571339_e3d1ccc1-62a9-403f-a28e-d96e346a4c93 became leader
	I0312 00:21:00.854802       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-571339_e3d1ccc1-62a9-403f-a28e-d96e346a4c93!
	
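	Read together, the two storage-provisioner logs show one crash and one clean recovery: the first container hit an i/o timeout against the apiserver service VIP (10.96.0.1:443) and exited fatally; its replacement then acquired the kube-system/k8s.io-minikube-hostpath lease and started the controller. Per the event above, the lease is held on an Endpoints object and can be inspected directly (sketch only):
	
	    kubectl --context old-k8s-version-571339 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml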

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-571339 -n old-k8s-version-571339
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-571339 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-9975d5f86-c87xf
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-571339 describe pod metrics-server-9975d5f86-c87xf
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-571339 describe pod metrics-server-9975d5f86-c87xf: exit status 1 (101.009872ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-9975d5f86-c87xf" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-571339 describe pod metrics-server-9975d5f86-c87xf: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (372.92s)
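The NotFound from the final describe is a post-mortem race, not an extra failure: the metrics-server pod listed as non-running at helpers_test.go:272 was gone by the time the describe at helpers_test.go:277 ran. A race-tolerant variant of that fetch could use kubectl get, which (unlike describe) supports --ignore-not-found; illustrative only:

    kubectl --context old-k8s-version-571339 -n kube-system get pod metrics-server-9975d5f86-c87xf --ignore-not-found -o yaml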

                                                
                                    

Test pass (297/335)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 9.2
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.08
9 TestDownloadOnly/v1.20.0/DeleteAll 0.21
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.15
12 TestDownloadOnly/v1.28.4/json-events 7.37
13 TestDownloadOnly/v1.28.4/preload-exists 0
17 TestDownloadOnly/v1.28.4/LogsDuration 0.08
18 TestDownloadOnly/v1.28.4/DeleteAll 0.2
19 TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds 0.14
21 TestDownloadOnly/v1.29.0-rc.2/json-events 8.24
22 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
26 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.41
27 TestDownloadOnly/v1.29.0-rc.2/DeleteAll 0.37
28 TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds 0.26
30 TestBinaryMirror 0.58
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.08
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.09
36 TestAddons/Setup 115.5
38 TestAddons/parallel/Registry 16.61
40 TestAddons/parallel/InspektorGadget 12.02
41 TestAddons/parallel/MetricsServer 5.88
44 TestAddons/parallel/CSI 81.93
45 TestAddons/parallel/Headlamp 11.58
46 TestAddons/parallel/CloudSpanner 5.66
47 TestAddons/parallel/LocalPath 51.36
48 TestAddons/parallel/NvidiaDevicePlugin 6.57
49 TestAddons/parallel/Yakd 6.01
52 TestAddons/serial/GCPAuth/Namespaces 0.17
53 TestAddons/StoppedEnableDisable 12.28
54 TestCertOptions 34.2
55 TestCertExpiration 230.13
57 TestForceSystemdFlag 43.32
58 TestForceSystemdEnv 42.32
59 TestDockerEnvContainerd 49.71
64 TestErrorSpam/setup 32.77
65 TestErrorSpam/start 0.78
66 TestErrorSpam/status 1.01
67 TestErrorSpam/pause 1.67
68 TestErrorSpam/unpause 1.78
69 TestErrorSpam/stop 1.54
72 TestFunctional/serial/CopySyncFile 0.01
73 TestFunctional/serial/StartWithProxy 54.86
74 TestFunctional/serial/AuditLog 0
75 TestFunctional/serial/SoftStart 6.3
76 TestFunctional/serial/KubeContext 0.06
77 TestFunctional/serial/KubectlGetPods 0.09
80 TestFunctional/serial/CacheCmd/cache/add_remote 4
81 TestFunctional/serial/CacheCmd/cache/add_local 1.51
82 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
83 TestFunctional/serial/CacheCmd/cache/list 0.06
84 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.31
85 TestFunctional/serial/CacheCmd/cache/cache_reload 2.05
86 TestFunctional/serial/CacheCmd/cache/delete 0.14
87 TestFunctional/serial/MinikubeKubectlCmd 0.15
88 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
89 TestFunctional/serial/ExtraConfig 46.68
90 TestFunctional/serial/ComponentHealth 0.1
91 TestFunctional/serial/LogsCmd 1.74
92 TestFunctional/serial/LogsFileCmd 1.77
93 TestFunctional/serial/InvalidService 5.01
95 TestFunctional/parallel/ConfigCmd 0.59
96 TestFunctional/parallel/DashboardCmd 12.4
97 TestFunctional/parallel/DryRun 0.53
98 TestFunctional/parallel/InternationalLanguage 0.25
99 TestFunctional/parallel/StatusCmd 1.43
103 TestFunctional/parallel/ServiceCmdConnect 9.76
104 TestFunctional/parallel/AddonsCmd 0.26
105 TestFunctional/parallel/PersistentVolumeClaim 27.43
107 TestFunctional/parallel/SSHCmd 0.66
108 TestFunctional/parallel/CpCmd 2.32
110 TestFunctional/parallel/FileSync 0.34
111 TestFunctional/parallel/CertSync 2.22
115 TestFunctional/parallel/NodeLabels 0.09
117 TestFunctional/parallel/NonActiveRuntimeDisabled 0.6
119 TestFunctional/parallel/License 0.33
121 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.69
122 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
124 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.5
125 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.13
126 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
130 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
131 TestFunctional/parallel/ServiceCmd/DeployApp 8.26
132 TestFunctional/parallel/ServiceCmd/List 0.51
133 TestFunctional/parallel/ServiceCmd/JSONOutput 0.61
134 TestFunctional/parallel/ProfileCmd/profile_not_create 0.47
135 TestFunctional/parallel/ServiceCmd/HTTPS 0.5
136 TestFunctional/parallel/ProfileCmd/profile_list 0.6
137 TestFunctional/parallel/ServiceCmd/Format 0.5
138 TestFunctional/parallel/ProfileCmd/profile_json_output 0.55
139 TestFunctional/parallel/ServiceCmd/URL 0.5
140 TestFunctional/parallel/MountCmd/any-port 7.02
141 TestFunctional/parallel/MountCmd/specific-port 2.36
142 TestFunctional/parallel/MountCmd/VerifyCleanup 1.88
143 TestFunctional/parallel/Version/short 0.1
144 TestFunctional/parallel/Version/components 1.34
145 TestFunctional/parallel/ImageCommands/ImageListShort 0.27
146 TestFunctional/parallel/ImageCommands/ImageListTable 0.32
147 TestFunctional/parallel/ImageCommands/ImageListJson 0.31
148 TestFunctional/parallel/ImageCommands/ImageListYaml 0.31
149 TestFunctional/parallel/ImageCommands/ImageBuild 2.67
150 TestFunctional/parallel/ImageCommands/Setup 1.61
152 TestFunctional/parallel/UpdateContextCmd/no_changes 0.39
153 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.22
154 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.22
158 TestFunctional/parallel/ImageCommands/ImageRemove 0.48
160 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.61
161 TestFunctional/delete_addon-resizer_images 0.08
162 TestFunctional/delete_my-image_image 0.02
163 TestFunctional/delete_minikube_cached_images 0.02
167 TestMutliControlPlane/serial/StartCluster 130.68
168 TestMutliControlPlane/serial/DeployApp 20.36
169 TestMutliControlPlane/serial/PingHostFromPods 1.75
170 TestMutliControlPlane/serial/AddWorkerNode 26.11
171 TestMutliControlPlane/serial/NodeLabels 0.11
172 TestMutliControlPlane/serial/HAppyAfterClusterStart 0.76
173 TestMutliControlPlane/serial/CopyFile 19.5
174 TestMutliControlPlane/serial/StopSecondaryNode 12.89
175 TestMutliControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.57
176 TestMutliControlPlane/serial/RestartSecondaryNode 18.83
177 TestMutliControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.87
178 TestMutliControlPlane/serial/RestartClusterKeepsNodes 111.88
179 TestMutliControlPlane/serial/DeleteSecondaryNode 11.45
180 TestMutliControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.56
181 TestMutliControlPlane/serial/StopCluster 35.95
182 TestMutliControlPlane/serial/RestartCluster 76.68
183 TestMutliControlPlane/serial/DegradedAfterClusterRestart 0.63
184 TestMutliControlPlane/serial/AddSecondaryNode 39.75
185 TestMutliControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.82
189 TestJSONOutput/start/Command 58.36
190 TestJSONOutput/start/Audit 0
192 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/pause/Command 0.75
196 TestJSONOutput/pause/Audit 0
198 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/unpause/Command 0.66
202 TestJSONOutput/unpause/Audit 0
204 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
207 TestJSONOutput/stop/Command 5.77
208 TestJSONOutput/stop/Audit 0
210 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
211 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
212 TestErrorJSONOutput 0.25
214 TestKicCustomNetwork/create_custom_network 38.6
215 TestKicCustomNetwork/use_default_bridge_network 34.9
216 TestKicExistingNetwork 34.53
217 TestKicCustomSubnet 31.96
218 TestKicStaticIP 37.56
219 TestMainNoArgs 0.07
220 TestMinikubeProfile 70.82
223 TestMountStart/serial/StartWithMountFirst 6.05
224 TestMountStart/serial/VerifyMountFirst 0.27
225 TestMountStart/serial/StartWithMountSecond 6.73
226 TestMountStart/serial/VerifyMountSecond 0.28
227 TestMountStart/serial/DeleteFirst 1.62
228 TestMountStart/serial/VerifyMountPostDelete 0.27
229 TestMountStart/serial/Stop 1.24
230 TestMountStart/serial/RestartStopped 7.36
231 TestMountStart/serial/VerifyMountPostStop 0.26
234 TestMultiNode/serial/FreshStart2Nodes 79.98
235 TestMultiNode/serial/DeployApp2Nodes 4.93
236 TestMultiNode/serial/PingHostFrom2Pods 1.09
237 TestMultiNode/serial/AddNode 18.74
238 TestMultiNode/serial/MultiNodeLabels 0.11
239 TestMultiNode/serial/ProfileList 0.37
240 TestMultiNode/serial/CopyFile 10.35
241 TestMultiNode/serial/StopNode 2.31
242 TestMultiNode/serial/StartAfterStop 9.37
243 TestMultiNode/serial/RestartKeepsNodes 89.45
244 TestMultiNode/serial/DeleteNode 5.43
245 TestMultiNode/serial/StopMultiNode 24.04
246 TestMultiNode/serial/RestartMultiNode 55.56
247 TestMultiNode/serial/ValidateNameConflict 36.67
252 TestPreload 119.35
254 TestScheduledStopUnix 106.12
257 TestInsufficientStorage 10.5
258 TestRunningBinaryUpgrade 80.96
260 TestKubernetesUpgrade 393.66
261 TestMissingContainerUpgrade 145.7
263 TestPause/serial/Start 68.85
265 TestNoKubernetes/serial/StartNoK8sWithVersion 0.11
266 TestNoKubernetes/serial/StartWithK8s 42.65
267 TestNoKubernetes/serial/StartWithStopK8s 16.48
268 TestNoKubernetes/serial/Start 5.38
269 TestNoKubernetes/serial/VerifyK8sNotRunning 0.28
270 TestNoKubernetes/serial/ProfileList 1.06
271 TestNoKubernetes/serial/Stop 1.23
272 TestNoKubernetes/serial/StartNoArgs 7.88
273 TestPause/serial/SecondStartNoReconfiguration 6.35
274 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.37
275 TestPause/serial/Pause 1.03
276 TestPause/serial/VerifyStatus 0.32
277 TestPause/serial/Unpause 0.76
278 TestPause/serial/PauseAgain 1.05
279 TestPause/serial/DeletePaused 2.83
280 TestPause/serial/VerifyDeletedResources 0.19
281 TestStoppedBinaryUpgrade/Setup 1.19
282 TestStoppedBinaryUpgrade/Upgrade 110.07
283 TestStoppedBinaryUpgrade/MinikubeLogs 1.23
298 TestNetworkPlugins/group/false 6.87
303 TestStartStop/group/old-k8s-version/serial/FirstStart 152.66
304 TestStartStop/group/old-k8s-version/serial/DeployApp 8.86
306 TestStartStop/group/no-preload/serial/FirstStart 77.74
307 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.56
308 TestStartStop/group/old-k8s-version/serial/Stop 13.6
309 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.27
311 TestStartStop/group/no-preload/serial/DeployApp 9.39
312 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.17
313 TestStartStop/group/no-preload/serial/Stop 12.11
314 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
315 TestStartStop/group/no-preload/serial/SecondStart 289.54
316 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
317 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
318 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.13
319 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.11
320 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.31
321 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.34
322 TestStartStop/group/old-k8s-version/serial/Pause 4.2
323 TestStartStop/group/no-preload/serial/Pause 4.31
325 TestStartStop/group/embed-certs/serial/FirstStart 71.13
327 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 69.41
328 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.35
329 TestStartStop/group/embed-certs/serial/DeployApp 8.42
330 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.54
331 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.66
332 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.09
333 TestStartStop/group/embed-certs/serial/Stop 12.13
334 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.24
335 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 273.03
336 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.32
337 TestStartStop/group/embed-certs/serial/SecondStart 303.75
338 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
339 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.1
340 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.26
341 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.19
343 TestStartStop/group/newest-cni/serial/FirstStart 52.36
344 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
345 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.13
346 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.31
347 TestStartStop/group/embed-certs/serial/Pause 4.78
348 TestNetworkPlugins/group/auto/Start 72.83
349 TestStartStop/group/newest-cni/serial/DeployApp 0
350 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.67
351 TestStartStop/group/newest-cni/serial/Stop 1.36
352 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.34
353 TestStartStop/group/newest-cni/serial/SecondStart 22.05
354 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
355 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
356 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.29
357 TestStartStop/group/newest-cni/serial/Pause 3.35
358 TestNetworkPlugins/group/kindnet/Start 61.34
359 TestNetworkPlugins/group/auto/KubeletFlags 0.48
360 TestNetworkPlugins/group/auto/NetCatPod 10.45
361 TestNetworkPlugins/group/auto/DNS 0.23
362 TestNetworkPlugins/group/auto/Localhost 0.2
363 TestNetworkPlugins/group/auto/HairPin 0.17
364 TestNetworkPlugins/group/calico/Start 83.18
365 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
366 TestNetworkPlugins/group/kindnet/KubeletFlags 0.34
367 TestNetworkPlugins/group/kindnet/NetCatPod 9.32
368 TestNetworkPlugins/group/kindnet/DNS 0.32
369 TestNetworkPlugins/group/kindnet/Localhost 0.31
370 TestNetworkPlugins/group/kindnet/HairPin 0.23
371 TestNetworkPlugins/group/custom-flannel/Start 63.2
372 TestNetworkPlugins/group/calico/ControllerPod 6.01
373 TestNetworkPlugins/group/calico/KubeletFlags 0.29
374 TestNetworkPlugins/group/calico/NetCatPod 11.36
375 TestNetworkPlugins/group/calico/DNS 0.27
376 TestNetworkPlugins/group/calico/Localhost 0.2
377 TestNetworkPlugins/group/calico/HairPin 0.17
378 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.42
379 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.37
380 TestNetworkPlugins/group/custom-flannel/DNS 0.25
381 TestNetworkPlugins/group/custom-flannel/Localhost 0.21
382 TestNetworkPlugins/group/custom-flannel/HairPin 0.22
383 TestNetworkPlugins/group/enable-default-cni/Start 91.15
384 TestNetworkPlugins/group/flannel/Start 63.99
385 TestNetworkPlugins/group/flannel/ControllerPod 6.01
386 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.34
387 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.31
388 TestNetworkPlugins/group/flannel/KubeletFlags 0.39
389 TestNetworkPlugins/group/flannel/NetCatPod 9.28
390 TestNetworkPlugins/group/enable-default-cni/DNS 0.2
391 TestNetworkPlugins/group/enable-default-cni/Localhost 0.16
392 TestNetworkPlugins/group/enable-default-cni/HairPin 0.17
393 TestNetworkPlugins/group/flannel/DNS 0.21
394 TestNetworkPlugins/group/flannel/Localhost 0.16
395 TestNetworkPlugins/group/flannel/HairPin 0.16
396 TestNetworkPlugins/group/bridge/Start 48.72
397 TestNetworkPlugins/group/bridge/KubeletFlags 0.3
398 TestNetworkPlugins/group/bridge/NetCatPod 10.25
399 TestNetworkPlugins/group/bridge/DNS 0.17
400 TestNetworkPlugins/group/bridge/Localhost 0.17
401 TestNetworkPlugins/group/bridge/HairPin 0.15
TestDownloadOnly/v1.20.0/json-events (9.2s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-120081 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-120081 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (9.202181885s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (9.20s)
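The -o=json mode exercised here makes minikube emit one structured JSON event per line, which is what the json-events test consumes. Assuming that CloudEvents-style schema (the type and data.message fields as minikube emits them), the step messages could be pulled out on the command line like so; the profile name "demo" is made up for the example:

    out/minikube-linux-arm64 start -o=json --download-only -p demo --driver=docker \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.message'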

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-120081
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-120081: exit status 85 (83.120214ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-120081 | jenkins | v1.32.0 | 11 Mar 24 23:33 UTC |          |
	|         | -p download-only-120081        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/11 23:33:17
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.22.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0311 23:33:17.433216  987691 out.go:291] Setting OutFile to fd 1 ...
	I0311 23:33:17.433356  987691 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 23:33:17.433367  987691 out.go:304] Setting ErrFile to fd 2...
	I0311 23:33:17.433372  987691 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 23:33:17.433621  987691 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18358-982285/.minikube/bin
	W0311 23:33:17.433764  987691 root.go:314] Error reading config file at /home/jenkins/minikube-integration/18358-982285/.minikube/config/config.json: open /home/jenkins/minikube-integration/18358-982285/.minikube/config/config.json: no such file or directory
	I0311 23:33:17.434225  987691 out.go:298] Setting JSON to true
	I0311 23:33:17.435110  987691 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":15345,"bootTime":1710184652,"procs":163,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0311 23:33:17.435182  987691 start.go:139] virtualization:  
	I0311 23:33:17.438187  987691 out.go:97] [download-only-120081] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0311 23:33:17.440276  987691 out.go:169] MINIKUBE_LOCATION=18358
	W0311 23:33:17.438412  987691 preload.go:294] Failed to list preload files: open /home/jenkins/minikube-integration/18358-982285/.minikube/cache/preloaded-tarball: no such file or directory
	I0311 23:33:17.438464  987691 notify.go:220] Checking for updates...
	I0311 23:33:17.444288  987691 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 23:33:17.445802  987691 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18358-982285/kubeconfig
	I0311 23:33:17.447875  987691 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18358-982285/.minikube
	I0311 23:33:17.449796  987691 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0311 23:33:17.453326  987691 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0311 23:33:17.453594  987691 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 23:33:17.474293  987691 docker.go:122] docker version: linux-25.0.4:Docker Engine - Community
	I0311 23:33:17.474419  987691 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0311 23:33:17.550742  987691 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:56 SystemTime:2024-03-11 23:33:17.541408169 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0311 23:33:17.550849  987691 docker.go:295] overlay module found
	I0311 23:33:17.552905  987691 out.go:97] Using the docker driver based on user configuration
	I0311 23:33:17.552949  987691 start.go:297] selected driver: docker
	I0311 23:33:17.552956  987691 start.go:901] validating driver "docker" against <nil>
	I0311 23:33:17.553095  987691 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0311 23:33:17.623853  987691 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:56 SystemTime:2024-03-11 23:33:17.614563369 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0311 23:33:17.624022  987691 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0311 23:33:17.624322  987691 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0311 23:33:17.624474  987691 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0311 23:33:17.626499  987691 out.go:169] Using Docker driver with root privileges
	I0311 23:33:17.628368  987691 cni.go:84] Creating CNI manager for ""
	I0311 23:33:17.628404  987691 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0311 23:33:17.628420  987691 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0311 23:33:17.628515  987691 start.go:340] cluster config:
	{Name:download-only-120081 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-120081 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 23:33:17.630387  987691 out.go:97] Starting "download-only-120081" primary control-plane node in "download-only-120081" cluster
	I0311 23:33:17.630413  987691 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0311 23:33:17.632179  987691 out.go:97] Pulling base image v0.0.42-1708944392-18244 ...
	I0311 23:33:17.632212  987691 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0311 23:33:17.632307  987691 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local docker daemon
	I0311 23:33:17.646876  987691 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 to local cache
	I0311 23:33:17.647063  987691 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local cache directory
	I0311 23:33:17.647173  987691 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 to local cache
	I0311 23:33:17.701060  987691 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0311 23:33:17.701107  987691 cache.go:56] Caching tarball of preloaded images
	I0311 23:33:17.701746  987691 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0311 23:33:17.703846  987691 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0311 23:33:17.703872  987691 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I0311 23:33:17.830366  987691 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:7e3d48ccb9f143791669d02e14ce1643 -> /home/jenkins/minikube-integration/18358-982285/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-120081 host does not exist
	  To start a cluster, run: "minikube start -p download-only-120081"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.08s)
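The download in the log above pins an md5 checksum in the URL query (checksum=md5:7e3d48ccb9f143791669d02e14ce1643), so the preload-exists subtest only had to find the cached file. Verifying the cached tarball by hand would look like this (a sketch; compare the digest against the value in the URL):

    md5sum /home/jenkins/minikube-integration/18358-982285/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4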

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.21s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-120081
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
TestDownloadOnly/v1.28.4/json-events (7.37s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-080906 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-080906 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (7.368999468s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (7.37s)

                                                
                                    
TestDownloadOnly/v1.28.4/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-080906
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-080906: exit status 85 (80.585718ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-120081 | jenkins | v1.32.0 | 11 Mar 24 23:33 UTC |                     |
	|         | -p download-only-120081        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.32.0 | 11 Mar 24 23:33 UTC | 11 Mar 24 23:33 UTC |
	| delete  | -p download-only-120081        | download-only-120081 | jenkins | v1.32.0 | 11 Mar 24 23:33 UTC | 11 Mar 24 23:33 UTC |
	| start   | -o=json --download-only        | download-only-080906 | jenkins | v1.32.0 | 11 Mar 24 23:33 UTC |                     |
	|         | -p download-only-080906        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/11 23:33:27
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.22.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0311 23:33:27.073203  987851 out.go:291] Setting OutFile to fd 1 ...
	I0311 23:33:27.073380  987851 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 23:33:27.073402  987851 out.go:304] Setting ErrFile to fd 2...
	I0311 23:33:27.073423  987851 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 23:33:27.073684  987851 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18358-982285/.minikube/bin
	I0311 23:33:27.074108  987851 out.go:298] Setting JSON to true
	I0311 23:33:27.074987  987851 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":15355,"bootTime":1710184652,"procs":161,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0311 23:33:27.075087  987851 start.go:139] virtualization:  
	I0311 23:33:27.078030  987851 out.go:97] [download-only-080906] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0311 23:33:27.080583  987851 out.go:169] MINIKUBE_LOCATION=18358
	I0311 23:33:27.078334  987851 notify.go:220] Checking for updates...
	I0311 23:33:27.083040  987851 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 23:33:27.085346  987851 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18358-982285/kubeconfig
	I0311 23:33:27.087709  987851 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18358-982285/.minikube
	I0311 23:33:27.089849  987851 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0311 23:33:27.094030  987851 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0311 23:33:27.094340  987851 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 23:33:27.116893  987851 docker.go:122] docker version: linux-25.0.4:Docker Engine - Community
	I0311 23:33:27.117023  987851 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0311 23:33:27.180191  987851 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:50 SystemTime:2024-03-11 23:33:27.170067797 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0311 23:33:27.180304  987851 docker.go:295] overlay module found
	I0311 23:33:27.182075  987851 out.go:97] Using the docker driver based on user configuration
	I0311 23:33:27.182103  987851 start.go:297] selected driver: docker
	I0311 23:33:27.182109  987851 start.go:901] validating driver "docker" against <nil>
	I0311 23:33:27.182233  987851 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0311 23:33:27.233558  987851 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:50 SystemTime:2024-03-11 23:33:27.224489357 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0311 23:33:27.233738  987851 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0311 23:33:27.234031  987851 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0311 23:33:27.234182  987851 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0311 23:33:27.236220  987851 out.go:169] Using Docker driver with root privileges
	I0311 23:33:27.238365  987851 cni.go:84] Creating CNI manager for ""
	I0311 23:33:27.238383  987851 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0311 23:33:27.238394  987851 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0311 23:33:27.238474  987851 start.go:340] cluster config:
	{Name:download-only-080906 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-080906 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 23:33:27.240619  987851 out.go:97] Starting "download-only-080906" primary control-plane node in "download-only-080906" cluster
	I0311 23:33:27.240654  987851 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0311 23:33:27.242746  987851 out.go:97] Pulling base image v0.0.42-1708944392-18244 ...
	I0311 23:33:27.242770  987851 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0311 23:33:27.242941  987851 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local docker daemon
	I0311 23:33:27.257552  987851 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 to local cache
	I0311 23:33:27.257690  987851 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local cache directory
	I0311 23:33:27.257712  987851 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local cache directory, skipping pull
	I0311 23:33:27.257717  987851 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 exists in cache, skipping pull
	I0311 23:33:27.257729  987851 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 as a tarball
	I0311 23:33:27.314643  987851 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4
	I0311 23:33:27.314671  987851 cache.go:56] Caching tarball of preloaded images
	I0311 23:33:27.314845  987851 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0311 23:33:27.316949  987851 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I0311 23:33:27.316976  987851 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4 ...
	I0311 23:33:27.424684  987851 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4?checksum=md5:cc2d75db20c4d651f0460755d6df7b03 -> /home/jenkins/minikube-integration/18358-982285/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-080906 host does not exist
	  To start a cluster, run: "minikube start -p download-only-080906"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.08s)
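Note: the download.go:107 line in the log above carries both the preload URL and the md5 digest minikube verifies after download. If the preload ever needs to be fetched and checked by hand, something like the following should work; the commands are illustrative, with the URL and digest copied verbatim from the log (drop the ?checksum= query, which is read by minikube's downloader rather than the server):

	curl -LO https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4
	md5sum preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-arm64.tar.lz4    # expect cc2d75db20c4d651f0460755d6df7b03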

TestDownloadOnly/v1.28.4/DeleteAll (0.2s)

=== RUN   TestDownloadOnly/v1.28.4/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.4/DeleteAll (0.20s)

TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-080906
--- PASS: TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.29.0-rc.2/json-events (8.24s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-667507 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-667507 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (8.236289454s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (8.24s)

TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.41s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-667507
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-667507: exit status 85 (406.196895ms)

-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-120081 | jenkins | v1.32.0 | 11 Mar 24 23:33 UTC |                     |
	|         | -p download-only-120081           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0      |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 11 Mar 24 23:33 UTC | 11 Mar 24 23:33 UTC |
	| delete  | -p download-only-120081           | download-only-120081 | jenkins | v1.32.0 | 11 Mar 24 23:33 UTC | 11 Mar 24 23:33 UTC |
	| start   | -o=json --download-only           | download-only-080906 | jenkins | v1.32.0 | 11 Mar 24 23:33 UTC |                     |
	|         | -p download-only-080906           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 11 Mar 24 23:33 UTC | 11 Mar 24 23:33 UTC |
	| delete  | -p download-only-080906           | download-only-080906 | jenkins | v1.32.0 | 11 Mar 24 23:33 UTC | 11 Mar 24 23:33 UTC |
	| start   | -o=json --download-only           | download-only-667507 | jenkins | v1.32.0 | 11 Mar 24 23:33 UTC |                     |
	|         | -p download-only-667507           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/11 23:33:34
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.22.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0311 23:33:34.867584  988010 out.go:291] Setting OutFile to fd 1 ...
	I0311 23:33:34.867786  988010 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 23:33:34.867810  988010 out.go:304] Setting ErrFile to fd 2...
	I0311 23:33:34.867827  988010 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 23:33:34.868091  988010 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18358-982285/.minikube/bin
	I0311 23:33:34.868543  988010 out.go:298] Setting JSON to true
	I0311 23:33:34.869431  988010 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":15363,"bootTime":1710184652,"procs":161,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0311 23:33:34.869519  988010 start.go:139] virtualization:  
	I0311 23:33:34.871907  988010 out.go:97] [download-only-667507] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0311 23:33:34.874083  988010 out.go:169] MINIKUBE_LOCATION=18358
	I0311 23:33:34.872175  988010 notify.go:220] Checking for updates...
	I0311 23:33:34.878061  988010 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 23:33:34.879946  988010 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18358-982285/kubeconfig
	I0311 23:33:34.881649  988010 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18358-982285/.minikube
	I0311 23:33:34.883451  988010 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0311 23:33:34.887208  988010 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0311 23:33:34.887498  988010 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 23:33:34.910389  988010 docker.go:122] docker version: linux-25.0.4:Docker Engine - Community
	I0311 23:33:34.910510  988010 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0311 23:33:34.976086  988010 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:49 SystemTime:2024-03-11 23:33:34.967122437 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0311 23:33:34.976222  988010 docker.go:295] overlay module found
	I0311 23:33:34.978474  988010 out.go:97] Using the docker driver based on user configuration
	I0311 23:33:34.978500  988010 start.go:297] selected driver: docker
	I0311 23:33:34.978511  988010 start.go:901] validating driver "docker" against <nil>
	I0311 23:33:34.978648  988010 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0311 23:33:35.033657  988010 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:49 SystemTime:2024-03-11 23:33:35.024374429 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0311 23:33:35.033832  988010 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0311 23:33:35.034121  988010 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0311 23:33:35.034297  988010 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0311 23:33:35.036619  988010 out.go:169] Using Docker driver with root privileges
	I0311 23:33:35.038524  988010 cni.go:84] Creating CNI manager for ""
	I0311 23:33:35.038550  988010 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0311 23:33:35.038561  988010 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0311 23:33:35.038660  988010 start.go:340] cluster config:
	{Name:download-only-667507 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:download-only-667507 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 23:33:35.040690  988010 out.go:97] Starting "download-only-667507" primary control-plane node in "download-only-667507" cluster
	I0311 23:33:35.040726  988010 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0311 23:33:35.042676  988010 out.go:97] Pulling base image v0.0.42-1708944392-18244 ...
	I0311 23:33:35.042722  988010 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime containerd
	I0311 23:33:35.042849  988010 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local docker daemon
	I0311 23:33:35.059812  988010 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 to local cache
	I0311 23:33:35.059942  988010 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local cache directory
	I0311 23:33:35.059965  988010 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 in local cache directory, skipping pull
	I0311 23:33:35.059970  988010 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 exists in cache, skipping pull
	I0311 23:33:35.059977  988010 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 as a tarball
	I0311 23:33:35.108764  988010 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-arm64.tar.lz4
	I0311 23:33:35.108792  988010 cache.go:56] Caching tarball of preloaded images
	I0311 23:33:35.108969  988010 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime containerd
	I0311 23:33:35.111557  988010 out.go:97] Downloading Kubernetes v1.29.0-rc.2 preload ...
	I0311 23:33:35.111594  988010 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-arm64.tar.lz4 ...
	I0311 23:33:35.218149  988010 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-arm64.tar.lz4?checksum=md5:adc883bf092a67b4673b5b5787f99b2f -> /home/jenkins/minikube-integration/18358-982285/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-667507 host does not exist
	  To start a cluster, run: "minikube start -p download-only-667507"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.41s)
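Both download-only runs log the same CNI decision: cni.go sees the docker driver paired with the containerd runtime and recommends kindnet. That default can be overridden at start time with minikube's --cni flag; an illustrative invocation (the choice of flannel here is arbitrary):

	out/minikube-linux-arm64 start -p download-only-667507 --driver=docker --container-runtime=containerd --cni=flannel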

TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.37s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.37s)

TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.26s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-667507
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.26s)

TestBinaryMirror (0.58s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-718612 --alsologtostderr --binary-mirror http://127.0.0.1:35069 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-718612" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-718612
--- PASS: TestBinaryMirror (0.58s)
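TestBinaryMirror passes --binary-mirror so that the kubernetes binaries (kubectl, kubelet, kubeadm) are fetched from a local HTTP endpoint (http://127.0.0.1:35069 above) instead of the default release mirror. A rough manual equivalent, assuming a directory laid out like the upstream mirror (the path and port here are illustrative):

	python3 -m http.server 35069 --directory /path/to/binary-mirror &
	out/minikube-linux-arm64 start --download-only -p binary-mirror-718612 --alsologtostderr --binary-mirror http://127.0.0.1:35069 --driver=docker --container-runtime=containerd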

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-340965
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-340965: exit status 85 (78.233258ms)

-- stdout --
	* Profile "addons-340965" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-340965"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.09s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-340965
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-340965: exit status 85 (87.16063ms)

-- stdout --
	* Profile "addons-340965" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-340965"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.09s)
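Both PreSetup tests count as passes because enabling or disabling an addon against a profile that does not exist fails with exit status 85, as the two Non-zero exit lines above show, rather than succeeding silently. A script can branch on that code directly; a small illustrative check:

	out/minikube-linux-arm64 addons enable dashboard -p addons-340965
	[ $? -eq 85 ] && echo "profile addons-340965 does not exist yet"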

TestAddons/Setup (115.5s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-arm64 start -p addons-340965 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns
addons_test.go:109: (dbg) Done: out/minikube-linux-arm64 start -p addons-340965 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns: (1m55.495842277s)
--- PASS: TestAddons/Setup (115.50s)

TestAddons/parallel/Registry (16.61s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 45.31921ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-vzlg2" [2b2010b4-72b4-4529-9c91-720efb092e0c] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.005819877s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-2nf5b" [94842744-4fc5-4400-b1de-06c2a38939d2] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.005208343s
addons_test.go:340: (dbg) Run:  kubectl --context addons-340965 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-340965 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-340965 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.481115996s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-arm64 -p addons-340965 ip
2024/03/11 23:35:57 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:388: (dbg) Run:  out/minikube-linux-arm64 -p addons-340965 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.61s)
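The registry addon is probed from two directions here: in-cluster via the service DNS name (the busybox wget --spider against registry.kube-system.svc.cluster.local) and from the host via the node IP on port 5000 (the DEBUG GET above). Assuming the addon runs a standard Docker registry v2, the host-side check can also list pushed repositories; illustrative:

	curl http://192.168.49.2:5000/v2/_catalog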

TestAddons/parallel/InspektorGadget (12.02s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-lhzbs" [3377d867-234e-43eb-b461-05190ad34299] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004503868s
addons_test.go:841: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-340965
addons_test.go:841: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-340965: (6.01630986s)
--- PASS: TestAddons/parallel/InspektorGadget (12.02s)

TestAddons/parallel/MetricsServer (5.88s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 8.734453ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-69cf46c98-nsv6v" [c6cf4532-bd58-4c32-91d0-ecc672ae77af] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.006326684s
addons_test.go:415: (dbg) Run:  kubectl --context addons-340965 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-linux-arm64 -p addons-340965 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.88s)

TestAddons/parallel/CSI (81.93s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 46.088277ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-340965 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-340965 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-340965 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-340965 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-340965 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-340965 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-340965 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-340965 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-340965 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-340965 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-340965 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-340965 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-340965 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-340965 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-340965 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-340965 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-340965 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-340965 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-340965 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-340965 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-340965 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-340965 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-340965 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-340965 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-340965 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-340965 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-340965 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-340965 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-340965 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-340965 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-340965 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-340965 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-340965 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-340965 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-340965 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [92695f47-f16a-41cd-8611-fc76ebefad86] Pending
helpers_test.go:344: "task-pv-pod" [92695f47-f16a-41cd-8611-fc76ebefad86] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [92695f47-f16a-41cd-8611-fc76ebefad86] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.007952274s
addons_test.go:584: (dbg) Run:  kubectl --context addons-340965 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-340965 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-340965 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-340965 delete pod task-pv-pod
addons_test.go:600: (dbg) Run:  kubectl --context addons-340965 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-340965 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-340965 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-340965 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-340965 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-340965 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-340965 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-340965 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-340965 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-340965 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-340965 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-340965 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-340965 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-340965 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-340965 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-340965 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-340965 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-340965 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-340965 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-340965 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-340965 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-340965 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-340965 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [bed5cedb-78c3-4b91-bbad-9ed4c6b2c9e7] Pending
helpers_test.go:344: "task-pv-pod-restore" [bed5cedb-78c3-4b91-bbad-9ed4c6b2c9e7] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [bed5cedb-78c3-4b91-bbad-9ed4c6b2c9e7] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.004186181s
addons_test.go:626: (dbg) Run:  kubectl --context addons-340965 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-340965 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-340965 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-arm64 -p addons-340965 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-linux-arm64 -p addons-340965 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.817934967s)
addons_test.go:642: (dbg) Run:  out/minikube-linux-arm64 -p addons-340965 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (81.93s)
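The long runs of helpers_test.go:394 lines above are simply a poll on the claim's phase until it leaves Pending. The same wait can be expressed as a shell loop using the exact jsonpath the helper queries; illustrative:

	until [ "$(kubectl --context addons-340965 get pvc hpvc -o 'jsonpath={.status.phase}')" = "Bound" ]; do sleep 2; done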

TestAddons/parallel/Headlamp (11.58s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-340965 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-340965 --alsologtostderr -v=1: (1.572972798s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5485c556b-4k8cg" [de99f97b-da12-4ede-8add-bfdba0efcf79] Pending
helpers_test.go:344: "headlamp-5485c556b-4k8cg" [de99f97b-da12-4ede-8add-bfdba0efcf79] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5485c556b-4k8cg" [de99f97b-da12-4ede-8add-bfdba0efcf79] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.003732213s
--- PASS: TestAddons/parallel/Headlamp (11.58s)

TestAddons/parallel/CloudSpanner (5.66s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6548d5df46-k5g4w" [edff68cf-61a8-4e23-a94a-ce55d0a42652] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.005997858s
addons_test.go:860: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-340965
--- PASS: TestAddons/parallel/CloudSpanner (5.66s)

TestAddons/parallel/LocalPath (51.36s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-340965 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-340965 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-340965 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-340965 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-340965 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-340965 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-340965 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [6ccd7810-a84b-4a89-9a14-d0101943609f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [6ccd7810-a84b-4a89-9a14-d0101943609f] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [6ccd7810-a84b-4a89-9a14-d0101943609f] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003881139s
addons_test.go:891: (dbg) Run:  kubectl --context addons-340965 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-arm64 -p addons-340965 ssh "cat /opt/local-path-provisioner/pvc-4f4ee275-c05f-4f2f-a04f-01bb18f0ca33_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-340965 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-340965 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-arm64 -p addons-340965 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-linux-arm64 -p addons-340965 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.202571534s)
--- PASS: TestAddons/parallel/LocalPath (51.36s)
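
Note: the flow above can be replayed by hand against the same profile. A minimal sketch (the poll loop stands in for the repeated helpers_test.go:394 phase checks; the testdata manifests themselves are not reproduced in this log):

    # Replay the local-path flow, assuming the storage-provisioner-rancher
    # addon is still enabled on the addons-340965 profile.
    kubectl --context addons-340965 apply -f testdata/storage-provisioner-rancher/pvc.yaml
    kubectl --context addons-340965 apply -f testdata/storage-provisioner-rancher/pod.yaml
    # The claim stays Pending until the pod consumes it, so poll the phase:
    until [ "$(kubectl --context addons-340965 get pvc test-pvc -n default \
          -o jsonpath='{.status.phase}')" = "Bound" ]; do sleep 5; done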

TestAddons/parallel/NvidiaDevicePlugin (6.57s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-zdvjj" [e64e6a9f-0ea4-4a0a-99d1-b04f1decd16f] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.00412238s
addons_test.go:955: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-340965
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.57s)

TestAddons/parallel/Yakd (6.01s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-8nc7j" [50ebcd86-9697-4d10-aa0d-e569aabc78d9] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004191703s
--- PASS: TestAddons/parallel/Yakd (6.01s)

TestAddons/serial/GCPAuth/Namespaces (0.17s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-340965 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-340965 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.17s)

TestAddons/StoppedEnableDisable (12.28s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-340965
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-340965: (11.986929816s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-340965
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-340965
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-340965
--- PASS: TestAddons/StoppedEnableDisable (12.28s)

TestCertOptions (34.2s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-844120 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-844120 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (31.520071679s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-844120 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-844120 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-844120 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-844120" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-844120
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-844120: (2.028525554s)
--- PASS: TestCertOptions (34.20s)
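
Note: the checks behind cert_options_test.go amount to inspecting the generated serving certificate and kubeconfig. A rough hand-run equivalent (the grep pattern and jsonpath here are illustrative, not part of the test):

    # The extra SANs and IPs passed at start time should appear in the cert:
    out/minikube-linux-arm64 -p cert-options-844120 ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
      | grep -E '192\.168\.15\.15|www\.google\.com'
    # And the kubeconfig should point at the non-default API server port 8555:
    kubectl --context cert-options-844120 config view \
      -o jsonpath='{.clusters[?(@.name=="cert-options-844120")].cluster.server}'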

TestCertExpiration (230.13s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-627308 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd
E0312 00:15:41.351419  987686 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/addons-340965/client.crt: no such file or directory
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-627308 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (40.721873955s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-627308 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-627308 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (7.048190284s)
helpers_test.go:175: Cleaning up "cert-expiration-627308" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-627308
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-627308: (2.361897635s)
--- PASS: TestCertExpiration (230.13s)
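
Note: the ~230s wall time is dominated by waiting out the short-lived certificates between the two starts. The shape of the test as a hand-run sketch (the explicit sleep is an assumption about how that gap is spent):

    # Issue 3m certs, let them lapse, then restart with a long expiration to
    # confirm minikube regenerates expired certificates cleanly.
    out/minikube-linux-arm64 start -p cert-expiration-627308 --memory=2048 \
      --cert-expiration=3m --driver=docker --container-runtime=containerd
    sleep 180    # allow the 3m certificates to expire
    out/minikube-linux-arm64 start -p cert-expiration-627308 --memory=2048 \
      --cert-expiration=8760h --driver=docker --container-runtime=containerd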

TestForceSystemdFlag (43.32s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-356624 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-356624 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (40.390927613s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-356624 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-356624" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-356624
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-356624: (2.481285352s)
--- PASS: TestForceSystemdFlag (43.32s)
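
Note: what docker_test.go:121 is looking for in that ssh output, roughly: with --force-systemd, the generated containerd config should select the systemd cgroup driver. A one-line sketch:

    # Expect the runc options in config.toml to carry SystemdCgroup = true:
    out/minikube-linux-arm64 -p force-systemd-flag-356624 ssh \
      "cat /etc/containerd/config.toml" | grep SystemdCgroup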

TestForceSystemdEnv (42.32s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-074205 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-074205 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (39.814860445s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-074205 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-074205" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-074205
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-074205: (2.095073422s)
--- PASS: TestForceSystemdEnv (42.32s)

TestDockerEnvContainerd (49.71s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-342736 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-342736 --driver=docker  --container-runtime=containerd: (33.719366456s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-342736"
docker_test.go:189: (dbg) Done: /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-342736": (1.249658811s)
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-9BHL52EbAUPR/agent.1004845" SSH_AGENT_PID="1004846" DOCKER_HOST=ssh://docker@127.0.0.1:33907 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-9BHL52EbAUPR/agent.1004845" SSH_AGENT_PID="1004846" DOCKER_HOST=ssh://docker@127.0.0.1:33907 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-9BHL52EbAUPR/agent.1004845" SSH_AGENT_PID="1004846" DOCKER_HOST=ssh://docker@127.0.0.1:33907 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.250888488s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-9BHL52EbAUPR/agent.1004845" SSH_AGENT_PID="1004846" DOCKER_HOST=ssh://docker@127.0.0.1:33907 docker image ls"
helpers_test.go:175: Cleaning up "dockerenv-342736" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-342736
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-342736: (2.004026888s)
--- PASS: TestDockerEnvContainerd (49.71s)
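
Note: condensed, the docker-env round trip above points a host docker client at the dockerd inside the node over SSH, then builds and lists an image through it (BuildKit disabled, as in the test):

    # Export DOCKER_HOST and ssh-agent settings into the current shell:
    eval "$(out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-342736)"
    docker version    # now served by the daemon inside the minikube container
    DOCKER_BUILDKIT=0 docker build \
      -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env
    docker image ls | grep minikube-dockerenv-containerd-test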

TestErrorSpam/setup (32.77s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-435342 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-435342 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-435342 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-435342 --driver=docker  --container-runtime=containerd: (32.766520692s)
--- PASS: TestErrorSpam/setup (32.77s)

TestErrorSpam/start (0.78s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-435342 --log_dir /tmp/nospam-435342 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-435342 --log_dir /tmp/nospam-435342 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-435342 --log_dir /tmp/nospam-435342 start --dry-run
--- PASS: TestErrorSpam/start (0.78s)

TestErrorSpam/status (1.01s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-435342 --log_dir /tmp/nospam-435342 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-435342 --log_dir /tmp/nospam-435342 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-435342 --log_dir /tmp/nospam-435342 status
--- PASS: TestErrorSpam/status (1.01s)

TestErrorSpam/pause (1.67s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-435342 --log_dir /tmp/nospam-435342 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-435342 --log_dir /tmp/nospam-435342 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-435342 --log_dir /tmp/nospam-435342 pause
--- PASS: TestErrorSpam/pause (1.67s)

TestErrorSpam/unpause (1.78s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-435342 --log_dir /tmp/nospam-435342 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-435342 --log_dir /tmp/nospam-435342 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-435342 --log_dir /tmp/nospam-435342 unpause
--- PASS: TestErrorSpam/unpause (1.78s)

TestErrorSpam/stop (1.54s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-435342 --log_dir /tmp/nospam-435342 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-435342 --log_dir /tmp/nospam-435342 stop: (1.269875558s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-435342 --log_dir /tmp/nospam-435342 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-435342 --log_dir /tmp/nospam-435342 stop
--- PASS: TestErrorSpam/stop (1.54s)

TestFunctional/serial/CopySyncFile (0.01s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/18358-982285/.minikube/files/etc/test/nested/copy/987686/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.01s)

TestFunctional/serial/StartWithProxy (54.86s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-arm64 start -p functional-270400 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
E0311 23:40:41.352397  987686 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/addons-340965/client.crt: no such file or directory
E0311 23:40:41.359790  987686 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/addons-340965/client.crt: no such file or directory
E0311 23:40:41.370066  987686 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/addons-340965/client.crt: no such file or directory
E0311 23:40:41.390641  987686 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/addons-340965/client.crt: no such file or directory
E0311 23:40:41.430887  987686 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/addons-340965/client.crt: no such file or directory
E0311 23:40:41.511175  987686 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/addons-340965/client.crt: no such file or directory
E0311 23:40:41.671627  987686 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/addons-340965/client.crt: no such file or directory
E0311 23:40:41.992166  987686 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/addons-340965/client.crt: no such file or directory
E0311 23:40:42.633091  987686 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/addons-340965/client.crt: no such file or directory
E0311 23:40:43.913614  987686 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/addons-340965/client.crt: no such file or directory
E0311 23:40:46.473901  987686 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/addons-340965/client.crt: no such file or directory
E0311 23:40:51.594215  987686 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/addons-340965/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-linux-arm64 start -p functional-270400 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (54.857008636s)
--- PASS: TestFunctional/serial/StartWithProxy (54.86s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (6.3s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-arm64 start -p functional-270400 --alsologtostderr -v=8
E0311 23:41:01.834860  987686 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/addons-340965/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-arm64 start -p functional-270400 --alsologtostderr -v=8: (6.29821351s)
functional_test.go:659: soft start took 6.30096058s for "functional-270400" cluster.
--- PASS: TestFunctional/serial/SoftStart (6.30s)

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.09s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-270400 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

TestFunctional/serial/CacheCmd/cache/add_remote (4s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-270400 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-270400 cache add registry.k8s.io/pause:3.1: (1.443095198s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-270400 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-270400 cache add registry.k8s.io/pause:3.3: (1.336773123s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-270400 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-270400 cache add registry.k8s.io/pause:latest: (1.22443536s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.00s)
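
Note: the cache subcommands exercised across this group compose into a small host-side workflow; a condensed sketch:

    # Pull an image into minikube's host-side cache and load it into the node:
    out/minikube-linux-arm64 -p functional-270400 cache add registry.k8s.io/pause:3.1
    out/minikube-linux-arm64 cache list      # enumerate cached image refs
    out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1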

TestFunctional/serial/CacheCmd/cache/add_local (1.51s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-270400 /tmp/TestFunctionalserialCacheCmdcacheadd_local295073819/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-arm64 -p functional-270400 cache add minikube-local-cache-test:functional-270400
functional_test.go:1090: (dbg) Run:  out/minikube-linux-arm64 -p functional-270400 cache delete minikube-local-cache-test:functional-270400
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-270400
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.51s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-arm64 -p functional-270400 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-arm64 -p functional-270400 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-arm64 -p functional-270400 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-270400 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (296.862723ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-arm64 -p functional-270400 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-linux-arm64 -p functional-270400 cache reload: (1.121067679s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-arm64 -p functional-270400 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.05s)
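
Note: the reload semantics being verified: an image deleted inside the node is restored from the host-side cache, with no fresh registry pull. By hand:

    # Remove the image in-node, then restore it from the cache:
    out/minikube-linux-arm64 -p functional-270400 ssh sudo crictl rmi registry.k8s.io/pause:latest
    out/minikube-linux-arm64 -p functional-270400 cache reload
    out/minikube-linux-arm64 -p functional-270400 ssh sudo crictl inspecti registry.k8s.io/pause:latest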

TestFunctional/serial/CacheCmd/cache/delete (0.14s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.14s)

TestFunctional/serial/MinikubeKubectlCmd (0.15s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-arm64 -p functional-270400 kubectl -- --context functional-270400 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.15s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-270400 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

TestFunctional/serial/ExtraConfig (46.68s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-arm64 start -p functional-270400 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0311 23:41:22.315108  987686 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/addons-340965/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-arm64 start -p functional-270400 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (46.676713846s)
functional_test.go:757: restart took 46.676824284s for "functional-270400" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (46.68s)
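
Note: a quick way to confirm the --extra-config value reached the running control plane (a sketch; the label selector and jsonpath follow standard kubeadm static-pod conventions and are not part of this test):

    # The apiserver command line should carry the extra admission plugin:
    kubectl --context functional-270400 -n kube-system get pod \
      -l component=kube-apiserver \
      -o jsonpath='{.items[0].spec.containers[0].command}' \
      | grep -o 'enable-admission-plugins=[^"]*'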

TestFunctional/serial/ComponentHealth (0.1s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-270400 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)
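
Note: the per-component phase and readiness lines above come from walking the control-plane pods; roughly equivalent by hand (the jsonpath is illustrative):

    # One line per control-plane component with its phase:
    kubectl --context functional-270400 get po -l tier=control-plane -n kube-system \
      -o jsonpath='{range .items[*]}{.metadata.labels.component}{"\t"}{.status.phase}{"\n"}{end}'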

TestFunctional/serial/LogsCmd (1.74s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-arm64 -p functional-270400 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-arm64 -p functional-270400 logs: (1.740690185s)
--- PASS: TestFunctional/serial/LogsCmd (1.74s)

TestFunctional/serial/LogsFileCmd (1.77s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-arm64 -p functional-270400 logs --file /tmp/TestFunctionalserialLogsFileCmd2694135545/001/logs.txt
E0311 23:42:03.275246  987686 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/addons-340965/client.crt: no such file or directory
functional_test.go:1246: (dbg) Done: out/minikube-linux-arm64 -p functional-270400 logs --file /tmp/TestFunctionalserialLogsFileCmd2694135545/001/logs.txt: (1.768751183s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.77s)

TestFunctional/serial/InvalidService (5.01s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-270400 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-270400
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-270400: exit status 115 (671.716414ms)
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31093 |
	|-----------|-------------|-------------|---------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-270400 delete -f testdata/invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-270400 delete -f testdata/invalidsvc.yaml: (1.081082115s)
--- PASS: TestFunctional/serial/InvalidService (5.01s)
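
Note: exit status 115 is the SVC_UNREACHABLE code shown in the stderr box above; the negative test in shell form:

    # A service with no backing pod should make `minikube service` fail fast:
    kubectl --context functional-270400 apply -f testdata/invalidsvc.yaml
    out/minikube-linux-arm64 service invalid-svc -p functional-270400; echo "exit=$?"   # expect 115
    kubectl --context functional-270400 delete -f testdata/invalidsvc.yaml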

TestFunctional/parallel/ConfigCmd (0.59s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-270400 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-270400 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-270400 config get cpus: exit status 14 (99.658566ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-270400 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-270400 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-270400 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-270400 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-270400 config get cpus: exit status 14 (120.500955ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.59s)
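
Note: the exit-status-14 branches are the interesting part here: config get signals a missing key through its exit code rather than its output. The full cycle:

    out/minikube-linux-arm64 -p functional-270400 config get cpus; echo "exit=$?"   # 14: key unset
    out/minikube-linux-arm64 -p functional-270400 config set cpus 2
    out/minikube-linux-arm64 -p functional-270400 config get cpus                   # prints 2
    out/minikube-linux-arm64 -p functional-270400 config unset cpus
    out/minikube-linux-arm64 -p functional-270400 config get cpus; echo "exit=$?"   # 14 again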

TestFunctional/parallel/DashboardCmd (12.4s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-270400 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-270400 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 1018964: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (12.40s)

TestFunctional/parallel/DryRun (0.53s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-arm64 start -p functional-270400 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-270400 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (276.313963ms)
-- stdout --
	* [functional-270400] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18358
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18358-982285/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18358-982285/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0311 23:42:43.004591 1018614 out.go:291] Setting OutFile to fd 1 ...
	I0311 23:42:43.004788 1018614 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 23:42:43.004797 1018614 out.go:304] Setting ErrFile to fd 2...
	I0311 23:42:43.004802 1018614 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 23:42:43.005080 1018614 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18358-982285/.minikube/bin
	I0311 23:42:43.005545 1018614 out.go:298] Setting JSON to false
	I0311 23:42:43.006639 1018614 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":15911,"bootTime":1710184652,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0311 23:42:43.006726 1018614 start.go:139] virtualization:  
	I0311 23:42:43.009921 1018614 out.go:177] * [functional-270400] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0311 23:42:43.012412 1018614 out.go:177]   - MINIKUBE_LOCATION=18358
	I0311 23:42:43.012622 1018614 notify.go:220] Checking for updates...
	I0311 23:42:43.016436 1018614 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 23:42:43.019859 1018614 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18358-982285/kubeconfig
	I0311 23:42:43.022819 1018614 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18358-982285/.minikube
	I0311 23:42:43.025289 1018614 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0311 23:42:43.027260 1018614 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0311 23:42:43.030462 1018614 config.go:182] Loaded profile config "functional-270400": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0311 23:42:43.031134 1018614 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 23:42:43.060039 1018614 docker.go:122] docker version: linux-25.0.4:Docker Engine - Community
	I0311 23:42:43.060168 1018614 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0311 23:42:43.185529 1018614 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:57 SystemTime:2024-03-11 23:42:43.172401553 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0311 23:42:43.185656 1018614 docker.go:295] overlay module found
	I0311 23:42:43.189197 1018614 out.go:177] * Using the docker driver based on existing profile
	I0311 23:42:43.191101 1018614 start.go:297] selected driver: docker
	I0311 23:42:43.191121 1018614 start.go:901] validating driver "docker" against &{Name:functional-270400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-270400 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:do
cker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 23:42:43.191240 1018614 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0311 23:42:43.194183 1018614 out.go:177] 
	W0311 23:42:43.196362 1018614 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0311 23:42:43.198660 1018614 out.go:177] 
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-arm64 start -p functional-270400 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.53s)
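
Note: the non-zero exit (status 23) is the point of the first run: resource validation happens during --dry-run, before any node is touched. Condensed:

    # An under-provisioned request must fail validation (RSRC_INSUFFICIENT_REQ_MEMORY):
    out/minikube-linux-arm64 start -p functional-270400 --dry-run --memory 250MB \
      --alsologtostderr --driver=docker --container-runtime=containerd; echo "exit=$?"   # expect 23
    # A sane dry run exits 0 without mutating the existing profile:
    out/minikube-linux-arm64 start -p functional-270400 --dry-run --alsologtostderr -v=1 \
      --driver=docker --container-runtime=containerd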

TestFunctional/parallel/InternationalLanguage (0.25s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-arm64 start -p functional-270400 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-270400 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (244.788386ms)
-- stdout --
	* [functional-270400] minikube v1.32.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18358
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18358-982285/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18358-982285/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0311 23:42:42.786824 1018573 out.go:291] Setting OutFile to fd 1 ...
	I0311 23:42:42.786994 1018573 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 23:42:42.787017 1018573 out.go:304] Setting ErrFile to fd 2...
	I0311 23:42:42.787038 1018573 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 23:42:42.787504 1018573 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18358-982285/.minikube/bin
	I0311 23:42:42.788009 1018573 out.go:298] Setting JSON to false
	I0311 23:42:42.789092 1018573 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":15911,"bootTime":1710184652,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0311 23:42:42.789194 1018573 start.go:139] virtualization:  
	I0311 23:42:42.791525 1018573 out.go:177] * [functional-270400] minikube v1.32.0 sur Ubuntu 20.04 (arm64)
	I0311 23:42:42.794292 1018573 out.go:177]   - MINIKUBE_LOCATION=18358
	I0311 23:42:42.794304 1018573 notify.go:220] Checking for updates...
	I0311 23:42:42.798342 1018573 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0311 23:42:42.800764 1018573 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18358-982285/kubeconfig
	I0311 23:42:42.802934 1018573 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18358-982285/.minikube
	I0311 23:42:42.805044 1018573 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0311 23:42:42.807054 1018573 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0311 23:42:42.809483 1018573 config.go:182] Loaded profile config "functional-270400": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0311 23:42:42.809993 1018573 driver.go:392] Setting default libvirt URI to qemu:///system
	I0311 23:42:42.836102 1018573 docker.go:122] docker version: linux-25.0.4:Docker Engine - Community
	I0311 23:42:42.836226 1018573 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0311 23:42:42.908242 1018573 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:57 SystemTime:2024-03-11 23:42:42.898650233 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0311 23:42:42.908349 1018573 docker.go:295] overlay module found
	I0311 23:42:42.910766 1018573 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0311 23:42:42.912698 1018573 start.go:297] selected driver: docker
	I0311 23:42:42.912729 1018573 start.go:901] validating driver "docker" against &{Name:functional-270400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1708944392-18244@sha256:8610dac8144c3f59a6cf50871eb10395cea122e148262744231a04c396033b08 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-270400 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:do
cker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0311 23:42:42.912855 1018573 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0311 23:42:42.915611 1018573 out.go:177] 
	W0311 23:42:42.917892 1018573 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0311 23:42:42.919581 1018573 out.go:177] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.25s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.43s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-arm64 -p functional-270400 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-arm64 -p functional-270400 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-arm64 -p functional-270400 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.43s)
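The status command accepts a Go template over its status struct, which is what the -f run above exercises (fields Host, Kubelet, APIServer, Kubeconfig). A minimal sketch of the two output modes, using the same profile:

out/minikube-linux-arm64 -p functional-270400 status -f 'host:{{.Host}},kubelet:{{.Kubelet}}'   # custom template
out/minikube-linux-arm64 -p functional-270400 status -o json                                    # machine-readable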

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (9.76s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1623: (dbg) Run:  kubectl --context functional-270400 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-270400 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-7799dfb7c6-k22rd" [96d21749-8ae9-4550-aadb-9a562dd43708] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-7799dfb7c6-k22rd" [96d21749-8ae9-4550-aadb-9a562dd43708] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 9.004456927s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-arm64 -p functional-270400 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.49.2:31344
functional_test.go:1671: http://192.168.49.2:31344: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-7799dfb7c6-k22rd

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31344
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (9.76s)
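Collected from the run above, the end-to-end flow this subtest exercises can be replayed by hand; the curl step is an illustrative addition, not part of the logged test:

kubectl --context functional-270400 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
kubectl --context functional-270400 expose deployment hello-node-connect --type=NodePort --port=8080
URL=$(out/minikube-linux-arm64 -p functional-270400 service hello-node-connect --url)
curl -s "$URL"   # echoserver reflects the request back, as in the body shown above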

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.26s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-arm64 -p functional-270400 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-arm64 -p functional-270400 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.26s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (27.43s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [302fa562-4aa2-49c7-a42c-25585be341e5] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004763833s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-270400 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-270400 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-270400 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-270400 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [06243254-fe72-4e54-8e9d-cc9b00d807ea] Pending
helpers_test.go:344: "sp-pod" [06243254-fe72-4e54-8e9d-cc9b00d807ea] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [06243254-fe72-4e54-8e9d-cc9b00d807ea] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.004468643s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-270400 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-270400 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-270400 delete -f testdata/storage-provisioner/pod.yaml: (1.335593135s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-270400 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [828ad7b2-9682-451b-8873-0f1e0744adb1] Pending
helpers_test.go:344: "sp-pod" [828ad7b2-9682-451b-8873-0f1e0744adb1] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [828ad7b2-9682-451b-8873-0f1e0744adb1] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.00553005s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-270400 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (27.43s)
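The testdata/storage-provisioner/pvc.yaml file itself is not reproduced in this report; a minimal hypothetical stand-in that minikube's default storage-provisioner would bind looks like this (the claim name matches the test's get pvc check, the size is an assumption):

cat <<'EOF' | kubectl --context functional-270400 apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 500Mi
EOF
kubectl --context functional-270400 get pvc myclaim -o=json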

                                                
                                    
TestFunctional/parallel/SSHCmd (0.66s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-arm64 -p functional-270400 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-arm64 -p functional-270400 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.66s)

                                                
                                    
TestFunctional/parallel/CpCmd (2.32s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-270400 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-270400 ssh -n functional-270400 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-270400 cp functional-270400:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2386225084/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-270400 ssh -n functional-270400 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-270400 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-270400 ssh -n functional-270400 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.32s)
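minikube cp works in both directions, as the run above shows; the node-side path must be absolute, and a node-to-host copy is addressed as <profile>:<path>. A sketch (the local target filename is illustrative):

out/minikube-linux-arm64 -p functional-270400 cp testdata/cp-test.txt /home/docker/cp-test.txt           # host -> node
out/minikube-linux-arm64 -p functional-270400 cp functional-270400:/home/docker/cp-test.txt ./copy.txt   # node -> host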

                                                
                                    
TestFunctional/parallel/FileSync (0.34s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/987686/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-arm64 -p functional-270400 ssh "sudo cat /etc/test/nested/copy/987686/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.34s)
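FileSync checks that files staged under the host's $MINIKUBE_HOME/files tree are copied into the node at the matching absolute path during provisioning. A rough sketch, assuming the sync is picked up on the next start of the profile (the file name is hypothetical):

mkdir -p ~/.minikube/files/etc/test
echo 'hello from the host' > ~/.minikube/files/etc/test/hello   # hypothetical file
out/minikube-linux-arm64 start -p functional-270400             # sync happens while provisioning
out/minikube-linux-arm64 -p functional-270400 ssh "cat /etc/test/hello"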

                                                
                                    
TestFunctional/parallel/CertSync (2.22s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/987686.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-270400 ssh "sudo cat /etc/ssl/certs/987686.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/987686.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-270400 ssh "sudo cat /usr/share/ca-certificates/987686.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-270400 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/9876862.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-270400 ssh "sudo cat /etc/ssl/certs/9876862.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/9876862.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-270400 ssh "sudo cat /usr/share/ca-certificates/9876862.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-270400 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.22s)
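The 51391683.0 and 3ec20f2e.0 names checked above are OpenSSL subject-hash aliases of the synced PEM files, which is how programs scanning /etc/ssl/certs locate a CA. The hash for a given certificate can be recomputed with:

openssl x509 -noout -subject_hash -in 987686.pem   # prints the <hash> used for /etc/ssl/certs/<hash>.0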

                                                
                                    
TestFunctional/parallel/NodeLabels (0.09s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-270400 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)
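The go-template above prints only the label keys of the first node; an equivalent jsonpath form, for comparison:

kubectl --context functional-270400 get nodes -o jsonpath='{.items[0].metadata.labels}'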

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.6s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-270400 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-270400 ssh "sudo systemctl is-active docker": exit status 1 (320.471126ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-270400 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-270400 ssh "sudo systemctl is-active crio": exit status 1 (275.404363ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.60s)
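Both non-zero exits above are the point of the test: with containerd as the active runtime, docker and crio must report inactive, and systemctl is-active exits 0 only for an active unit (status 3 seen here is systemd's code for an inactive unit):

out/minikube-linux-arm64 -p functional-270400 ssh "sudo systemctl is-active containerd"   # expect: active, exit 0
out/minikube-linux-arm64 -p functional-270400 ssh "sudo systemctl is-active docker"       # expect: inactive, non-zero exit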

                                                
                                    
TestFunctional/parallel/License (0.33s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.33s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.69s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-270400 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-270400 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-270400 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-270400 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 1016360: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.69s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-270400 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.5s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-270400 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [349f2938-9db9-4290-a3d8-0089bcd304d0] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [349f2938-9db9-4290-a3d8-0089bcd304d0] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.006140108s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.50s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.13s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-270400 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.13s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.110.97.200 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
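Taken together, the tunnel subtests above chain a start/wait/access cycle, with DeleteTunnel below closing it. A by-hand sketch, assuming testdata/testsvc.yaml defines a type: LoadBalancer service named nginx-svc (the file itself is not shown in this report):

out/minikube-linux-arm64 -p functional-270400 tunnel &   # must keep running; may prompt for sudo
kubectl --context functional-270400 apply -f testdata/testsvc.yaml
kubectl --context functional-270400 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
curl -s http://10.110.97.200/   # IP from this run; yours will differ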

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-270400 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (8.26s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1433: (dbg) Run:  kubectl --context functional-270400 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-270400 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-759d89bdcc-wkkd9" [1a036382-d54f-4bc7-82ff-12789af7d89f] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-759d89bdcc-wkkd9" [1a036382-d54f-4bc7-82ff-12789af7d89f] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.003331545s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (8.26s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.51s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-arm64 -p functional-270400 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.51s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.61s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-arm64 -p functional-270400 service list -o json
functional_test.go:1490: Took "613.540537ms" to run "out/minikube-linux-arm64 -p functional-270400 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.61s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.47s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.47s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.5s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-arm64 -p functional-270400 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.49.2:31449
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.50s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.6s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1311: Took "481.046515ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1325: Took "120.450245ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.60s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.5s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-arm64 -p functional-270400 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.50s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.55s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1362: Took "469.287586ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1375: Took "82.511373ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.55s)
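The timing gap above comes from --light, which skips validating each cluster's status and just reads the profile files:

out/minikube-linux-arm64 profile list -o json           # validates cluster status (~0.47s here)
out/minikube-linux-arm64 profile list -o json --light   # skips validation (~0.08s here)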

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.5s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-arm64 -p functional-270400 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.49.2:31449
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.50s)
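Collected, the ServiceCmd subtests resolve the same NodePort service four ways:

out/minikube-linux-arm64 -p functional-270400 service list -o json
out/minikube-linux-arm64 -p functional-270400 service hello-node --url
out/minikube-linux-arm64 -p functional-270400 service hello-node --url --format={{.IP}}
out/minikube-linux-arm64 -p functional-270400 service --namespace=default --https --url hello-node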

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (7.02s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-270400 /tmp/TestFunctionalparallelMountCmdany-port759699553/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1710200561049351165" to /tmp/TestFunctionalparallelMountCmdany-port759699553/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1710200561049351165" to /tmp/TestFunctionalparallelMountCmdany-port759699553/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1710200561049351165" to /tmp/TestFunctionalparallelMountCmdany-port759699553/001/test-1710200561049351165
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-270400 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-270400 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Mar 11 23:42 created-by-test
-rw-r--r-- 1 docker docker 24 Mar 11 23:42 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Mar 11 23:42 test-1710200561049351165
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-270400 ssh cat /mount-9p/test-1710200561049351165
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-270400 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [94709214-2a68-4599-853b-e8f858656d0b] Pending
helpers_test.go:344: "busybox-mount" [94709214-2a68-4599-853b-e8f858656d0b] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [94709214-2a68-4599-853b-e8f858656d0b] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [94709214-2a68-4599-853b-e8f858656d0b] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.004377321s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-270400 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-270400 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-270400 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-270400 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-270400 /tmp/TestFunctionalparallelMountCmdany-port759699553/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.02s)
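The mount command runs a 9p server on the host in the foreground, which is why the test drives it as a daemon and verifies with findmnt before unmounting. A sketch (the host directory is hypothetical; --port pins the 9p server port, as in the specific-port subtest below):

out/minikube-linux-arm64 mount -p functional-270400 /tmp/hostdir:/mount-9p --port 46464 &
out/minikube-linux-arm64 -p functional-270400 ssh "findmnt -T /mount-9p | grep 9p"
out/minikube-linux-arm64 -p functional-270400 ssh "ls -la /mount-9p"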

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.36s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-270400 /tmp/TestFunctionalparallelMountCmdspecific-port1905340635/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-270400 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-270400 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (365.226416ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-270400 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-270400 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-270400 /tmp/TestFunctionalparallelMountCmdspecific-port1905340635/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-270400 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-270400 ssh "sudo umount -f /mount-9p": exit status 1 (374.72503ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-270400 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-270400 /tmp/TestFunctionalparallelMountCmdspecific-port1905340635/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.36s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.88s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-270400 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2258749861/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-270400 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2258749861/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-270400 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2258749861/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-270400 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Done: out/minikube-linux-arm64 -p functional-270400 ssh "findmnt -T" /mount1: (1.055500598s)
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-270400 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-270400 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-270400 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-270400 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2258749861/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-270400 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2258749861/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-270400 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2258749861/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.88s)

                                                
                                    
TestFunctional/parallel/Version/short (0.1s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-arm64 -p functional-270400 version --short
--- PASS: TestFunctional/parallel/Version/short (0.10s)

                                                
                                    
TestFunctional/parallel/Version/components (1.34s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-arm64 -p functional-270400 version -o=json --components
functional_test.go:2266: (dbg) Done: out/minikube-linux-arm64 -p functional-270400 version -o=json --components: (1.340996534s)
--- PASS: TestFunctional/parallel/Version/components (1.34s)
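The two version modes exercised above, for reference; the --components form is slower because it queries the component versions running inside the node:

out/minikube-linux-arm64 -p functional-270400 version --short
out/minikube-linux-arm64 -p functional-270400 version -o=json --components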

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-270400 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-270400 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-270400
docker.io/kindest/kindnetd:v20240202-8f1494ea
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-270400 image ls --format short --alsologtostderr:
I0311 23:43:10.097992 1021111 out.go:291] Setting OutFile to fd 1 ...
I0311 23:43:10.098363 1021111 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0311 23:43:10.098398 1021111 out.go:304] Setting ErrFile to fd 2...
I0311 23:43:10.098421 1021111 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0311 23:43:10.098717 1021111 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18358-982285/.minikube/bin
I0311 23:43:10.099552 1021111 config.go:182] Loaded profile config "functional-270400": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0311 23:43:10.099747 1021111 config.go:182] Loaded profile config "functional-270400": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0311 23:43:10.100318 1021111 cli_runner.go:164] Run: docker container inspect functional-270400 --format={{.State.Status}}
I0311 23:43:10.118420 1021111 ssh_runner.go:195] Run: systemctl --version
I0311 23:43:10.118486 1021111 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-270400
I0311 23:43:10.139673 1021111 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33917 SSHKeyPath:/home/jenkins/minikube-integration/18358-982285/.minikube/machines/functional-270400/id_rsa Username:docker}
I0311 23:43:10.232238 1021111 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)
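This and the three ImageList subtests that follow render the same image list in four formats; per the stderr traces, each run shells into the node and reformats the output of sudo crictl images --output json client-side:

out/minikube-linux-arm64 -p functional-270400 image ls --format short
out/minikube-linux-arm64 -p functional-270400 image ls --format table
out/minikube-linux-arm64 -p functional-270400 image ls --format json
out/minikube-linux-arm64 -p functional-270400 image ls --format yaml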

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-270400 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-270400 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/kube-controller-manager     | v1.28.4            | sha256:9961cb | 30.4MB |
| registry.k8s.io/pause                       | 3.1                | sha256:8057e0 | 262kB  |
| docker.io/kindest/kindnetd                  | v20230809-80a64d96 | sha256:04b4ea | 25.3MB |
| registry.k8s.io/kube-proxy                  | v1.28.4            | sha256:3ca3ca | 22MB   |
| registry.k8s.io/pause                       | 3.3                | sha256:3d1873 | 249kB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:ba04bb | 8.03MB |
| registry.k8s.io/echoserver-arm              | 1.8                | sha256:72565b | 45.3MB |
| docker.io/library/nginx                     | latest             | sha256:760b7c | 67.2MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:1611cd | 1.94MB |
| registry.k8s.io/coredns/coredns             | v1.10.1            | sha256:97e046 | 14.6MB |
| registry.k8s.io/kube-scheduler              | v1.28.4            | sha256:05c284 | 17.1MB |
| docker.io/kindest/kindnetd                  | v20240202-8f1494ea | sha256:4740c1 | 25.3MB |
| docker.io/library/minikube-local-cache-test | functional-270400  | sha256:f9e372 | 1.01kB |
| registry.k8s.io/etcd                        | 3.5.9-0            | sha256:9cdd64 | 86.5MB |
| registry.k8s.io/kube-apiserver              | v1.28.4            | sha256:04b4c4 | 31.6MB |
| registry.k8s.io/pause                       | 3.9                | sha256:829e9d | 268kB  |
| registry.k8s.io/pause                       | latest             | sha256:8cb209 | 71.3kB |
| docker.io/library/nginx                     | alpine             | sha256:be5e6f | 17.6MB |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-270400 image ls --format table --alsologtostderr:
I0311 23:43:10.414667 1021170 out.go:291] Setting OutFile to fd 1 ...
I0311 23:43:10.414876 1021170 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0311 23:43:10.414890 1021170 out.go:304] Setting ErrFile to fd 2...
I0311 23:43:10.414896 1021170 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0311 23:43:10.415157 1021170 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18358-982285/.minikube/bin
I0311 23:43:10.415880 1021170 config.go:182] Loaded profile config "functional-270400": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0311 23:43:10.416043 1021170 config.go:182] Loaded profile config "functional-270400": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0311 23:43:10.416623 1021170 cli_runner.go:164] Run: docker container inspect functional-270400 --format={{.State.Status}}
I0311 23:43:10.447771 1021170 ssh_runner.go:195] Run: systemctl --version
I0311 23:43:10.447830 1021170 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-270400
I0311 23:43:10.471576 1021170 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33917 SSHKeyPath:/home/jenkins/minikube-integration/18358-982285/.minikube/machines/functional-270400/id_rsa Username:docker}
I0311 23:43:10.565378 1021170 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.32s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-270400 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-270400 image ls --format json --alsologtostderr:
[{"id":"sha256:4740c1948d3fceb8d7dacc63033aa6299d80794ee4f4811539ec1081d9211f3d","repoDigests":["docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988"],"repoTags":["docker.io/kindest/kindnetd:v20240202-8f1494ea"],"size":"25336339"},{"id":"sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"18306114"},{"id":"sha256:760b7cbba31e196288effd2af6924c42637ac5e0d67db4de6309f24518844676","repoDigests":["docker.io/library/nginx@sha256:c26ae7472d624ba1fafd296e73cecc4f93f853088e6a9c13c0d52f6ca5865107"],"repoTags":["docker.io/library/nginx:latest"],"size":"67216905"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"},{"id":"sha256:9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"30360149"},{"id":"sha256:05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54","repoDigests":["registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"17082307"},{"id":"sha256:04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"25324029"},{"id":"sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"74084559"},{"id":"sha256:f9e372de76ac21e608e620cd075798c030c369fd4f3578d2143d0f0d1dac3e37","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-270400"],"size":"1006"},{"id":"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"14557471"},{"id":"sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"45324675"},{"id":"sha256:9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace","repoDigests":["registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"86464836"},{"id":"sha256:be5e6f23a9904ed26efa7a49fb3d5e63d1c488dbb7b5134e869488afd747ec3f","repoDigests":["docker.io/library/nginx@sha256:6a2f8b28e45c4adea04ec207a251fd4a2df03ddc930f782af51e315ebc76e9a9"],"repoTags":["docker.io/library/nginx:alpine"],"size":"17601423"},{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419","repoDigests":["registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"31582354"},{"id":"sha256:3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39","repoDigests":["registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"22001357"},{"id":"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"268051"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-270400 image ls --format json --alsologtostderr:
I0311 23:43:10.380545 1021165 out.go:291] Setting OutFile to fd 1 ...
I0311 23:43:10.380802 1021165 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0311 23:43:10.380831 1021165 out.go:304] Setting ErrFile to fd 2...
I0311 23:43:10.380850 1021165 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0311 23:43:10.381115 1021165 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18358-982285/.minikube/bin
I0311 23:43:10.381754 1021165 config.go:182] Loaded profile config "functional-270400": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0311 23:43:10.381928 1021165 config.go:182] Loaded profile config "functional-270400": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0311 23:43:10.382456 1021165 cli_runner.go:164] Run: docker container inspect functional-270400 --format={{.State.Status}}
I0311 23:43:10.413106 1021165 ssh_runner.go:195] Run: systemctl --version
I0311 23:43:10.413156 1021165 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-270400
I0311 23:43:10.441891 1021165 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33917 SSHKeyPath:/home/jenkins/minikube-integration/18358-982285/.minikube/machines/functional-270400/id_rsa Username:docker}
I0311 23:43:10.531995 1021165 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.31s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-270400 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-270400 image ls --format yaml --alsologtostderr:
- id: sha256:9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "30360149"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "18306114"
- id: sha256:f9e372de76ac21e608e620cd075798c030c369fd4f3578d2143d0f0d1dac3e37
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-270400
size: "1006"
- id: sha256:be5e6f23a9904ed26efa7a49fb3d5e63d1c488dbb7b5134e869488afd747ec3f
repoDigests:
- docker.io/library/nginx@sha256:6a2f8b28e45c4adea04ec207a251fd4a2df03ddc930f782af51e315ebc76e9a9
repoTags:
- docker.io/library/nginx:alpine
size: "17601423"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "45324675"
- id: sha256:3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39
repoDigests:
- registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "22001357"
- id: sha256:05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "17082307"
- id: sha256:760b7cbba31e196288effd2af6924c42637ac5e0d67db4de6309f24518844676
repoDigests:
- docker.io/library/nginx@sha256:c26ae7472d624ba1fafd296e73cecc4f93f853088e6a9c13c0d52f6ca5865107
repoTags:
- docker.io/library/nginx:latest
size: "67216905"
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "14557471"
- id: sha256:9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace
repoDigests:
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "86464836"
- id: sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "268051"
- id: sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "74084559"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "25324029"
- id: sha256:4740c1948d3fceb8d7dacc63033aa6299d80794ee4f4811539ec1081d9211f3d
repoDigests:
- docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988
repoTags:
- docker.io/kindest/kindnetd:v20240202-8f1494ea
size: "25336339"
- id: sha256:04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "31582354"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-270400 image ls --format yaml --alsologtostderr:
I0311 23:43:10.105542 1021110 out.go:291] Setting OutFile to fd 1 ...
I0311 23:43:10.105749 1021110 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0311 23:43:10.105755 1021110 out.go:304] Setting ErrFile to fd 2...
I0311 23:43:10.105761 1021110 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0311 23:43:10.106097 1021110 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18358-982285/.minikube/bin
I0311 23:43:10.106779 1021110 config.go:182] Loaded profile config "functional-270400": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0311 23:43:10.106954 1021110 config.go:182] Loaded profile config "functional-270400": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0311 23:43:10.107497 1021110 cli_runner.go:164] Run: docker container inspect functional-270400 --format={{.State.Status}}
I0311 23:43:10.131683 1021110 ssh_runner.go:195] Run: systemctl --version
I0311 23:43:10.131748 1021110 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-270400
I0311 23:43:10.158290 1021110 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33917 SSHKeyPath:/home/jenkins/minikube-integration/18358-982285/.minikube/machines/functional-270400/id_rsa Username:docker}
I0311 23:43:10.262199 1021110 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.31s)
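Per the stderr above, `image ls` shells into the node and reads the containerd image store through crictl. A minimal by-hand equivalent (sketch only, reusing the profile name from this run):

    out/minikube-linux-arm64 -p functional-270400 image ls --format yaml
    # roughly what the command does under the hood, per the ssh_runner line above:
    out/minikube-linux-arm64 -p functional-270400 ssh "sudo crictl images --output json"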

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (2.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-arm64 -p functional-270400 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-270400 ssh pgrep buildkitd: exit status 1 (296.606567ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p functional-270400 image build -t localhost/my-image:functional-270400 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p functional-270400 image build -t localhost/my-image:functional-270400 testdata/build --alsologtostderr: (2.130767695s)
functional_test.go:322: (dbg) Stderr: out/minikube-linux-arm64 -p functional-270400 image build -t localhost/my-image:functional-270400 testdata/build --alsologtostderr:
I0311 23:43:10.953365 1021270 out.go:291] Setting OutFile to fd 1 ...
I0311 23:43:10.954061 1021270 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0311 23:43:10.954077 1021270 out.go:304] Setting ErrFile to fd 2...
I0311 23:43:10.954084 1021270 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0311 23:43:10.954392 1021270 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18358-982285/.minikube/bin
I0311 23:43:10.955087 1021270 config.go:182] Loaded profile config "functional-270400": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0311 23:43:10.956992 1021270 config.go:182] Loaded profile config "functional-270400": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0311 23:43:10.957562 1021270 cli_runner.go:164] Run: docker container inspect functional-270400 --format={{.State.Status}}
I0311 23:43:10.973983 1021270 ssh_runner.go:195] Run: systemctl --version
I0311 23:43:10.974033 1021270 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-270400
I0311 23:43:10.990323 1021270 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33917 SSHKeyPath:/home/jenkins/minikube-integration/18358-982285/.minikube/machines/functional-270400/id_rsa Username:docker}
I0311 23:43:11.079987 1021270 build_images.go:161] Building image from path: /tmp/build.769480223.tar
I0311 23:43:11.080118 1021270 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0311 23:43:11.089284 1021270 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.769480223.tar
I0311 23:43:11.092877 1021270 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.769480223.tar: stat -c "%s %y" /var/lib/minikube/build/build.769480223.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.769480223.tar': No such file or directory
I0311 23:43:11.092908 1021270 ssh_runner.go:362] scp /tmp/build.769480223.tar --> /var/lib/minikube/build/build.769480223.tar (3072 bytes)
I0311 23:43:11.118618 1021270 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.769480223
I0311 23:43:11.127787 1021270 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.769480223 -xf /var/lib/minikube/build/build.769480223.tar
I0311 23:43:11.137051 1021270 containerd.go:379] Building image: /var/lib/minikube/build/build.769480223
I0311 23:43:11.137195 1021270 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.769480223 --local dockerfile=/var/lib/minikube/build/build.769480223 --output type=image,name=localhost/my-image:functional-270400
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.7s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.2s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.4s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.3s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.1s

                                                
                                                
#8 exporting to image
#8 exporting layers
#8 exporting layers 0.1s done
#8 exporting manifest sha256:59ae0ff99e592928515d04ebc41e9832a39a5bcc721e76332dcfc144fc829e9a 0.0s done
#8 exporting config sha256:328891f240065c764eca880efe3f31636fad5c4545676aef7ff972a5a267fced 0.0s done
#8 naming to localhost/my-image:functional-270400 done
#8 DONE 0.1s
I0311 23:43:12.990220 1021270 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.769480223 --local dockerfile=/var/lib/minikube/build/build.769480223 --output type=image,name=localhost/my-image:functional-270400: (1.852970502s)
I0311 23:43:12.990302 1021270 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.769480223
I0311 23:43:13.001653 1021270 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.769480223.tar
I0311 23:43:13.013332 1021270 build_images.go:217] Built localhost/my-image:functional-270400 from /tmp/build.769480223.tar
I0311 23:43:13.013415 1021270 build_images.go:133] succeeded building to: functional-270400
I0311 23:43:13.013435 1021270 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-270400 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.67s)
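The buildkit trace (#1-#8) implies a three-instruction Dockerfile in testdata/build. The file itself is not shown in the log, so the following reconstruction is an assumption, including the content.txt payload:

    # Sketch: rebuild an equivalent context by hand; file contents are guesses
    # consistent with the trace (97B Dockerfile, 62B build context).
    mkdir -p /tmp/build-sketch && cd /tmp/build-sketch
    cat > Dockerfile <<'EOF'
    FROM gcr.io/k8s-minikube/busybox:latest
    RUN true
    ADD content.txt /
    EOF
    echo hello > content.txt
    out/minikube-linux-arm64 -p functional-270400 image build -t localhost/my-image:functional-270400 .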

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.587115036s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-270400
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.61s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-270400 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.39s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-270400 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.22s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-270400 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.22s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-arm64 -p functional-270400 image rm gcr.io/google-containers/addon-resizer:functional-270400 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-270400 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.48s)
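The trailing `image ls` is the removal check; a sketch of the same verification with an explicit assertion (the grep is illustrative, not part of the test):

    out/minikube-linux-arm64 -p functional-270400 image rm gcr.io/google-containers/addon-resizer:functional-270400
    out/minikube-linux-arm64 -p functional-270400 image ls | grep addon-resizer && echo "still present" || echo "removed"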

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-270400
functional_test.go:423: (dbg) Run:  out/minikube-linux-arm64 -p functional-270400 image save --daemon gcr.io/google-containers/addon-resizer:functional-270400 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-270400
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.61s)
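The three commands above round-trip an image: the host copy is deleted, `image save --daemon` exports it from the cluster's runtime back into the host docker daemon, and the final inspect proves it is present again. An annotated sketch:

    docker rmi gcr.io/google-containers/addon-resizer:functional-270400            # drop the host copy
    out/minikube-linux-arm64 -p functional-270400 image save --daemon \
      gcr.io/google-containers/addon-resizer:functional-270400                     # pull it back from the cluster runtime
    docker image inspect gcr.io/google-containers/addon-resizer:functional-270400  # succeeds again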

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.08s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-270400
--- PASS: TestFunctional/delete_addon-resizer_images (0.08s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-270400
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-270400
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMutliControlPlane/serial/StartCluster (130.68s)

                                                
                                                
=== RUN   TestMutliControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-847039 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E0311 23:43:25.196020  987686 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/addons-340965/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-847039 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (2m9.711500888s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-847039 status -v=7 --alsologtostderr
--- PASS: TestMutliControlPlane/serial/StartCluster (130.68s)
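The `--ha` flag provisions several control-plane nodes behind a shared endpoint; the status logs later in this report show three Control Plane entries and a kubeconfig pointing at https://192.168.49.254:8443 rather than a single node. A condensed sketch of the invocation above:

    out/minikube-linux-arm64 start -p ha-847039 --ha --wait=true --memory=2200 \
      --driver=docker --container-runtime=containerd
    out/minikube-linux-arm64 -p ha-847039 status   # expect multiple "Control Plane" entries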

                                                
                                    
TestMutliControlPlane/serial/DeployApp (20.36s)

                                                
                                                
=== RUN   TestMutliControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-847039 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-847039 -- rollout status deployment/busybox
E0311 23:45:41.352341  987686 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/addons-340965/client.crt: no such file or directory
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-847039 -- rollout status deployment/busybox: (17.054504945s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-847039 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-847039 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-847039 -- exec busybox-5b5d89c9d6-4c6wl -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-847039 -- exec busybox-5b5d89c9d6-b6fzl -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-847039 -- exec busybox-5b5d89c9d6-f9ztc -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-847039 -- exec busybox-5b5d89c9d6-4c6wl -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-847039 -- exec busybox-5b5d89c9d6-b6fzl -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-847039 -- exec busybox-5b5d89c9d6-f9ztc -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-847039 -- exec busybox-5b5d89c9d6-4c6wl -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-847039 -- exec busybox-5b5d89c9d6-b6fzl -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-847039 -- exec busybox-5b5d89c9d6-f9ztc -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMutliControlPlane/serial/DeployApp (20.36s)

                                                
                                    
TestMutliControlPlane/serial/PingHostFromPods (1.75s)

                                                
                                                
=== RUN   TestMutliControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-847039 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-847039 -- exec busybox-5b5d89c9d6-4c6wl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-847039 -- exec busybox-5b5d89c9d6-4c6wl -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-847039 -- exec busybox-5b5d89c9d6-b6fzl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-847039 -- exec busybox-5b5d89c9d6-b6fzl -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-847039 -- exec busybox-5b5d89c9d6-f9ztc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-847039 -- exec busybox-5b5d89c9d6-f9ztc -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMutliControlPlane/serial/PingHostFromPods (1.75s)
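The `awk 'NR==5' | cut -d' ' -f3` pipeline extracts the host IP from nslookup output; assuming busybox's classic layout, line 5 reads `Address 1: <ip> <name>`, so field 3 is the IP, which each pod then pings. A sketch with plain kubectl (the test goes through `out/minikube-linux-arm64 kubectl -p ha-847039 --` instead):

    kubectl --context ha-847039 exec busybox-5b5d89c9d6-4c6wl -- sh -c \
      "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"   # -> 192.168.49.1
    kubectl --context ha-847039 exec busybox-5b5d89c9d6-4c6wl -- sh -c "ping -c 1 192.168.49.1"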

                                                
                                    
TestMutliControlPlane/serial/AddWorkerNode (26.11s)

                                                
                                                
=== RUN   TestMutliControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-847039 -v=7 --alsologtostderr
E0311 23:46:09.036196  987686 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/addons-340965/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-847039 -v=7 --alsologtostderr: (25.104849369s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-847039 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-847039 status -v=7 --alsologtostderr: (1.00703183s)
--- PASS: TestMutliControlPlane/serial/AddWorkerNode (26.11s)

                                                
                                    
TestMutliControlPlane/serial/NodeLabels (0.11s)

                                                
                                                
=== RUN   TestMutliControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-847039 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMutliControlPlane/serial/NodeLabels (0.11s)

                                                
                                    
TestMutliControlPlane/serial/HAppyAfterClusterStart (0.76s)

                                                
                                                
=== RUN   TestMutliControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMutliControlPlane/serial/HAppyAfterClusterStart (0.76s)

                                                
                                    
TestMutliControlPlane/serial/CopyFile (19.5s)

                                                
                                                
=== RUN   TestMutliControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-arm64 -p ha-847039 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-847039 cp testdata/cp-test.txt ha-847039:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-847039 ssh -n ha-847039 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-847039 cp ha-847039:/home/docker/cp-test.txt /tmp/TestMutliControlPlaneserialCopyFile3240924936/001/cp-test_ha-847039.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-847039 ssh -n ha-847039 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-847039 cp ha-847039:/home/docker/cp-test.txt ha-847039-m02:/home/docker/cp-test_ha-847039_ha-847039-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-847039 ssh -n ha-847039 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-847039 ssh -n ha-847039-m02 "sudo cat /home/docker/cp-test_ha-847039_ha-847039-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-847039 cp ha-847039:/home/docker/cp-test.txt ha-847039-m03:/home/docker/cp-test_ha-847039_ha-847039-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-847039 ssh -n ha-847039 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-847039 ssh -n ha-847039-m03 "sudo cat /home/docker/cp-test_ha-847039_ha-847039-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-847039 cp ha-847039:/home/docker/cp-test.txt ha-847039-m04:/home/docker/cp-test_ha-847039_ha-847039-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-847039 ssh -n ha-847039 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-847039 ssh -n ha-847039-m04 "sudo cat /home/docker/cp-test_ha-847039_ha-847039-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-847039 cp testdata/cp-test.txt ha-847039-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-847039 ssh -n ha-847039-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-847039 cp ha-847039-m02:/home/docker/cp-test.txt /tmp/TestMutliControlPlaneserialCopyFile3240924936/001/cp-test_ha-847039-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-847039 ssh -n ha-847039-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-847039 cp ha-847039-m02:/home/docker/cp-test.txt ha-847039:/home/docker/cp-test_ha-847039-m02_ha-847039.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-847039 ssh -n ha-847039-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-847039 ssh -n ha-847039 "sudo cat /home/docker/cp-test_ha-847039-m02_ha-847039.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-847039 cp ha-847039-m02:/home/docker/cp-test.txt ha-847039-m03:/home/docker/cp-test_ha-847039-m02_ha-847039-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-847039 ssh -n ha-847039-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-847039 ssh -n ha-847039-m03 "sudo cat /home/docker/cp-test_ha-847039-m02_ha-847039-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-847039 cp ha-847039-m02:/home/docker/cp-test.txt ha-847039-m04:/home/docker/cp-test_ha-847039-m02_ha-847039-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-847039 ssh -n ha-847039-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-847039 ssh -n ha-847039-m04 "sudo cat /home/docker/cp-test_ha-847039-m02_ha-847039-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-847039 cp testdata/cp-test.txt ha-847039-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-847039 ssh -n ha-847039-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-847039 cp ha-847039-m03:/home/docker/cp-test.txt /tmp/TestMutliControlPlaneserialCopyFile3240924936/001/cp-test_ha-847039-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-847039 ssh -n ha-847039-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-847039 cp ha-847039-m03:/home/docker/cp-test.txt ha-847039:/home/docker/cp-test_ha-847039-m03_ha-847039.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-847039 ssh -n ha-847039-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-847039 ssh -n ha-847039 "sudo cat /home/docker/cp-test_ha-847039-m03_ha-847039.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-847039 cp ha-847039-m03:/home/docker/cp-test.txt ha-847039-m02:/home/docker/cp-test_ha-847039-m03_ha-847039-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-847039 ssh -n ha-847039-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-847039 ssh -n ha-847039-m02 "sudo cat /home/docker/cp-test_ha-847039-m03_ha-847039-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-847039 cp ha-847039-m03:/home/docker/cp-test.txt ha-847039-m04:/home/docker/cp-test_ha-847039-m03_ha-847039-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-847039 ssh -n ha-847039-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-847039 ssh -n ha-847039-m04 "sudo cat /home/docker/cp-test_ha-847039-m03_ha-847039-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-847039 cp testdata/cp-test.txt ha-847039-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-847039 ssh -n ha-847039-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-847039 cp ha-847039-m04:/home/docker/cp-test.txt /tmp/TestMutliControlPlaneserialCopyFile3240924936/001/cp-test_ha-847039-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-847039 ssh -n ha-847039-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-847039 cp ha-847039-m04:/home/docker/cp-test.txt ha-847039:/home/docker/cp-test_ha-847039-m04_ha-847039.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-847039 ssh -n ha-847039-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-847039 ssh -n ha-847039 "sudo cat /home/docker/cp-test_ha-847039-m04_ha-847039.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-847039 cp ha-847039-m04:/home/docker/cp-test.txt ha-847039-m02:/home/docker/cp-test_ha-847039-m04_ha-847039-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-847039 ssh -n ha-847039-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-847039 ssh -n ha-847039-m02 "sudo cat /home/docker/cp-test_ha-847039-m04_ha-847039-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-847039 cp ha-847039-m04:/home/docker/cp-test.txt ha-847039-m03:/home/docker/cp-test_ha-847039-m04_ha-847039-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-847039 ssh -n ha-847039-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-847039 ssh -n ha-847039-m03 "sudo cat /home/docker/cp-test_ha-847039-m04_ha-847039-m03.txt"
--- PASS: TestMutliControlPlane/serial/CopyFile (19.50s)
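The matrix above reduces to three copy directions, each verified with `ssh ... sudo cat`; a condensed sketch using paths from this run:

    out/minikube-linux-arm64 -p ha-847039 cp testdata/cp-test.txt ha-847039:/home/docker/cp-test.txt   # host -> node
    out/minikube-linux-arm64 -p ha-847039 cp ha-847039:/home/docker/cp-test.txt /tmp/cp-test.txt       # node -> host
    out/minikube-linux-arm64 -p ha-847039 cp ha-847039:/home/docker/cp-test.txt \
      ha-847039-m02:/home/docker/cp-test.txt                                                           # node -> node
    out/minikube-linux-arm64 -p ha-847039 ssh -n ha-847039-m02 "sudo cat /home/docker/cp-test.txt"     # verify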

                                                
                                    
TestMutliControlPlane/serial/StopSecondaryNode (12.89s)

                                                
                                                
=== RUN   TestMutliControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-arm64 -p ha-847039 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-arm64 -p ha-847039 node stop m02 -v=7 --alsologtostderr: (12.1743523s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-arm64 -p ha-847039 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-847039 status -v=7 --alsologtostderr: exit status 7 (719.755536ms)

                                                
                                                
-- stdout --
	ha-847039
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-847039-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-847039-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-847039-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0311 23:46:47.700674 1036545 out.go:291] Setting OutFile to fd 1 ...
	I0311 23:46:47.700884 1036545 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 23:46:47.700896 1036545 out.go:304] Setting ErrFile to fd 2...
	I0311 23:46:47.700902 1036545 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 23:46:47.701135 1036545 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18358-982285/.minikube/bin
	I0311 23:46:47.701322 1036545 out.go:298] Setting JSON to false
	I0311 23:46:47.701356 1036545 mustload.go:65] Loading cluster: ha-847039
	I0311 23:46:47.701482 1036545 notify.go:220] Checking for updates...
	I0311 23:46:47.701759 1036545 config.go:182] Loaded profile config "ha-847039": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0311 23:46:47.701771 1036545 status.go:255] checking status of ha-847039 ...
	I0311 23:46:47.702274 1036545 cli_runner.go:164] Run: docker container inspect ha-847039 --format={{.State.Status}}
	I0311 23:46:47.723514 1036545 status.go:330] ha-847039 host status = "Running" (err=<nil>)
	I0311 23:46:47.723542 1036545 host.go:66] Checking if "ha-847039" exists ...
	I0311 23:46:47.723844 1036545 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-847039
	I0311 23:46:47.741030 1036545 host.go:66] Checking if "ha-847039" exists ...
	I0311 23:46:47.741365 1036545 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0311 23:46:47.741421 1036545 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-847039
	I0311 23:46:47.760936 1036545 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33922 SSHKeyPath:/home/jenkins/minikube-integration/18358-982285/.minikube/machines/ha-847039/id_rsa Username:docker}
	I0311 23:46:47.853295 1036545 ssh_runner.go:195] Run: systemctl --version
	I0311 23:46:47.857759 1036545 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0311 23:46:47.871630 1036545 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0311 23:46:47.927731 1036545 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:54 OomKillDisable:true NGoroutines:76 SystemTime:2024-03-11 23:46:47.918027131 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0311 23:46:47.928329 1036545 kubeconfig.go:125] found "ha-847039" server: "https://192.168.49.254:8443"
	I0311 23:46:47.928355 1036545 api_server.go:166] Checking apiserver status ...
	I0311 23:46:47.928399 1036545 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 23:46:47.940154 1036545 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1461/cgroup
	I0311 23:46:47.949740 1036545 api_server.go:182] apiserver freezer: "5:freezer:/docker/52016678c54c78230a77756ba31b853075a3c3ef6086e6e6ad361c21c3c50e12/kubepods/burstable/pod32144dc7d5e060dc308961648b473a4e/2885849af84e084817607678bf5fcb863cf550b33ac42a89fa3eb8d22e7839af"
	I0311 23:46:47.949832 1036545 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/52016678c54c78230a77756ba31b853075a3c3ef6086e6e6ad361c21c3c50e12/kubepods/burstable/pod32144dc7d5e060dc308961648b473a4e/2885849af84e084817607678bf5fcb863cf550b33ac42a89fa3eb8d22e7839af/freezer.state
	I0311 23:46:47.958805 1036545 api_server.go:204] freezer state: "THAWED"
	I0311 23:46:47.958837 1036545 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0311 23:46:47.968245 1036545 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0311 23:46:47.968272 1036545 status.go:422] ha-847039 apiserver status = Running (err=<nil>)
	I0311 23:46:47.968284 1036545 status.go:257] ha-847039 status: &{Name:ha-847039 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0311 23:46:47.968301 1036545 status.go:255] checking status of ha-847039-m02 ...
	I0311 23:46:47.968632 1036545 cli_runner.go:164] Run: docker container inspect ha-847039-m02 --format={{.State.Status}}
	I0311 23:46:47.984996 1036545 status.go:330] ha-847039-m02 host status = "Stopped" (err=<nil>)
	I0311 23:46:47.985041 1036545 status.go:343] host is not running, skipping remaining checks
	I0311 23:46:47.985048 1036545 status.go:257] ha-847039-m02 status: &{Name:ha-847039-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0311 23:46:47.985069 1036545 status.go:255] checking status of ha-847039-m03 ...
	I0311 23:46:47.985384 1036545 cli_runner.go:164] Run: docker container inspect ha-847039-m03 --format={{.State.Status}}
	I0311 23:46:48.007702 1036545 status.go:330] ha-847039-m03 host status = "Running" (err=<nil>)
	I0311 23:46:48.007728 1036545 host.go:66] Checking if "ha-847039-m03" exists ...
	I0311 23:46:48.008077 1036545 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-847039-m03
	I0311 23:46:48.031175 1036545 host.go:66] Checking if "ha-847039-m03" exists ...
	I0311 23:46:48.031682 1036545 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0311 23:46:48.031766 1036545 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-847039-m03
	I0311 23:46:48.049889 1036545 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33932 SSHKeyPath:/home/jenkins/minikube-integration/18358-982285/.minikube/machines/ha-847039-m03/id_rsa Username:docker}
	I0311 23:46:48.140676 1036545 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0311 23:46:48.152957 1036545 kubeconfig.go:125] found "ha-847039" server: "https://192.168.49.254:8443"
	I0311 23:46:48.152986 1036545 api_server.go:166] Checking apiserver status ...
	I0311 23:46:48.153030 1036545 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 23:46:48.164716 1036545 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1306/cgroup
	I0311 23:46:48.174118 1036545 api_server.go:182] apiserver freezer: "5:freezer:/docker/9b7ce235c5d0e275d8efc5ba370cd4e89b2acdd963e86d488503991e0df84478/kubepods/burstable/pod29a7ef726e55ba6d4bf5e9d76f6fe461/10e692d6c9dedd4a7250c920d6ecfdbd49ed5b3a5ffc3333ed1c887e02d8d88e"
	I0311 23:46:48.174195 1036545 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/9b7ce235c5d0e275d8efc5ba370cd4e89b2acdd963e86d488503991e0df84478/kubepods/burstable/pod29a7ef726e55ba6d4bf5e9d76f6fe461/10e692d6c9dedd4a7250c920d6ecfdbd49ed5b3a5ffc3333ed1c887e02d8d88e/freezer.state
	I0311 23:46:48.183122 1036545 api_server.go:204] freezer state: "THAWED"
	I0311 23:46:48.183150 1036545 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0311 23:46:48.191945 1036545 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0311 23:46:48.191980 1036545 status.go:422] ha-847039-m03 apiserver status = Running (err=<nil>)
	I0311 23:46:48.191991 1036545 status.go:257] ha-847039-m03 status: &{Name:ha-847039-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0311 23:46:48.192045 1036545 status.go:255] checking status of ha-847039-m04 ...
	I0311 23:46:48.192455 1036545 cli_runner.go:164] Run: docker container inspect ha-847039-m04 --format={{.State.Status}}
	I0311 23:46:48.208543 1036545 status.go:330] ha-847039-m04 host status = "Running" (err=<nil>)
	I0311 23:46:48.208569 1036545 host.go:66] Checking if "ha-847039-m04" exists ...
	I0311 23:46:48.208882 1036545 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-847039-m04
	I0311 23:46:48.234204 1036545 host.go:66] Checking if "ha-847039-m04" exists ...
	I0311 23:46:48.234546 1036545 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0311 23:46:48.234591 1036545 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-847039-m04
	I0311 23:46:48.253573 1036545 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33937 SSHKeyPath:/home/jenkins/minikube-integration/18358-982285/.minikube/machines/ha-847039-m04/id_rsa Username:docker}
	I0311 23:46:48.344537 1036545 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0311 23:46:48.356464 1036545 status.go:257] ha-847039-m04 status: &{Name:ha-847039-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMutliControlPlane/serial/StopSecondaryNode (12.89s)
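The stderr above shows how `status` decides an apiserver is Running: locate the newest kube-apiserver process, check that its freezer cgroup is THAWED, then probe /healthz on the shared endpoint. A sketch of the same steps by hand (curl stands in for the Go HTTP client; <container-id>, <pod-uid>, and <container> are placeholders for the long ids in the log):

    pid=$(sudo pgrep -xnf 'kube-apiserver.*minikube.*')
    sudo egrep '^[0-9]+:freezer:' "/proc/${pid}/cgroup"
    sudo cat "/sys/fs/cgroup/freezer/docker/<container-id>/kubepods/burstable/<pod-uid>/<container>/freezer.state"   # expect THAWED
    curl -sk https://192.168.49.254:8443/healthz   # expect 200: ok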

                                                
                                    
TestMutliControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.57s)

                                                
                                                
=== RUN   TestMutliControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMutliControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.57s)

                                                
                                    
TestMutliControlPlane/serial/RestartSecondaryNode (18.83s)

                                                
                                                
=== RUN   TestMutliControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-arm64 -p ha-847039 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Done: out/minikube-linux-arm64 -p ha-847039 node start m02 -v=7 --alsologtostderr: (17.487632684s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p ha-847039 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-linux-arm64 -p ha-847039 status -v=7 --alsologtostderr: (1.248148859s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMutliControlPlane/serial/RestartSecondaryNode (18.83s)

                                                
                                    
TestMutliControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.87s)

                                                
                                                
=== RUN   TestMutliControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMutliControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.87s)

                                                
                                    
TestMutliControlPlane/serial/RestartClusterKeepsNodes (111.88s)

                                                
                                                
=== RUN   TestMutliControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-847039 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-847039 -v=7 --alsologtostderr
E0311 23:47:11.611453  987686 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/functional-270400/client.crt: no such file or directory
E0311 23:47:11.617077  987686 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/functional-270400/client.crt: no such file or directory
E0311 23:47:11.627744  987686 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/functional-270400/client.crt: no such file or directory
E0311 23:47:11.648539  987686 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/functional-270400/client.crt: no such file or directory
E0311 23:47:11.689166  987686 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/functional-270400/client.crt: no such file or directory
E0311 23:47:11.769406  987686 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/functional-270400/client.crt: no such file or directory
E0311 23:47:11.929869  987686 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/functional-270400/client.crt: no such file or directory
E0311 23:47:12.250420  987686 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/functional-270400/client.crt: no such file or directory
E0311 23:47:12.891337  987686 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/functional-270400/client.crt: no such file or directory
E0311 23:47:14.171644  987686 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/functional-270400/client.crt: no such file or directory
E0311 23:47:16.732177  987686 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/functional-270400/client.crt: no such file or directory
E0311 23:47:21.852541  987686 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/functional-270400/client.crt: no such file or directory
E0311 23:47:32.093601  987686 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/functional-270400/client.crt: no such file or directory
ha_test.go:462: (dbg) Done: out/minikube-linux-arm64 stop -p ha-847039 -v=7 --alsologtostderr: (26.285098899s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-arm64 start -p ha-847039 --wait=true -v=7 --alsologtostderr
E0311 23:47:52.574728  987686 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/functional-270400/client.crt: no such file or directory
E0311 23:48:33.535427  987686 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/functional-270400/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-arm64 start -p ha-847039 --wait=true -v=7 --alsologtostderr: (1m25.398325609s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-847039
--- PASS: TestMutliControlPlane/serial/RestartClusterKeepsNodes (111.88s)
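The assertion here is that a full stop/start cycle preserves the node set. A condensed sketch of the flow:

    out/minikube-linux-arm64 node list -p ha-847039      # record the node set
    out/minikube-linux-arm64 stop -p ha-847039
    out/minikube-linux-arm64 start -p ha-847039 --wait=true
    out/minikube-linux-arm64 node list -p ha-847039      # expect the same four nodes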

                                                
                                    
TestMutliControlPlane/serial/DeleteSecondaryNode (11.45s)

                                                
                                                
=== RUN   TestMutliControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-arm64 -p ha-847039 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-arm64 -p ha-847039 node delete m03 -v=7 --alsologtostderr: (10.465200262s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-arm64 -p ha-847039 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMutliControlPlane/serial/DeleteSecondaryNode (11.45s)

                                                
                                    
TestMutliControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.56s)

                                                
                                                
=== RUN   TestMutliControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMutliControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.56s)

                                                
                                    
TestMutliControlPlane/serial/StopCluster (35.95s)

                                                
                                                
=== RUN   TestMutliControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-arm64 -p ha-847039 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-linux-arm64 -p ha-847039 stop -v=7 --alsologtostderr: (35.836962164s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-arm64 -p ha-847039 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-847039 status -v=7 --alsologtostderr: exit status 7 (115.743902ms)

                                                
                                                
-- stdout --
	ha-847039
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-847039-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-847039-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0311 23:49:48.409627 1049830 out.go:291] Setting OutFile to fd 1 ...
	I0311 23:49:48.409863 1049830 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 23:49:48.409890 1049830 out.go:304] Setting ErrFile to fd 2...
	I0311 23:49:48.409912 1049830 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 23:49:48.410183 1049830 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18358-982285/.minikube/bin
	I0311 23:49:48.410410 1049830 out.go:298] Setting JSON to false
	I0311 23:49:48.410470 1049830 mustload.go:65] Loading cluster: ha-847039
	I0311 23:49:48.410516 1049830 notify.go:220] Checking for updates...
	I0311 23:49:48.410934 1049830 config.go:182] Loaded profile config "ha-847039": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0311 23:49:48.410981 1049830 status.go:255] checking status of ha-847039 ...
	I0311 23:49:48.411594 1049830 cli_runner.go:164] Run: docker container inspect ha-847039 --format={{.State.Status}}
	I0311 23:49:48.427267 1049830 status.go:330] ha-847039 host status = "Stopped" (err=<nil>)
	I0311 23:49:48.427287 1049830 status.go:343] host is not running, skipping remaining checks
	I0311 23:49:48.427295 1049830 status.go:257] ha-847039 status: &{Name:ha-847039 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0311 23:49:48.427358 1049830 status.go:255] checking status of ha-847039-m02 ...
	I0311 23:49:48.427655 1049830 cli_runner.go:164] Run: docker container inspect ha-847039-m02 --format={{.State.Status}}
	I0311 23:49:48.446473 1049830 status.go:330] ha-847039-m02 host status = "Stopped" (err=<nil>)
	I0311 23:49:48.446494 1049830 status.go:343] host is not running, skipping remaining checks
	I0311 23:49:48.446501 1049830 status.go:257] ha-847039-m02 status: &{Name:ha-847039-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0311 23:49:48.446522 1049830 status.go:255] checking status of ha-847039-m04 ...
	I0311 23:49:48.446834 1049830 cli_runner.go:164] Run: docker container inspect ha-847039-m04 --format={{.State.Status}}
	I0311 23:49:48.464740 1049830 status.go:330] ha-847039-m04 host status = "Stopped" (err=<nil>)
	I0311 23:49:48.464760 1049830 status.go:343] host is not running, skipping remaining checks
	I0311 23:49:48.464767 1049830 status.go:257] ha-847039-m04 status: &{Name:ha-847039-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMutliControlPlane/serial/StopCluster (35.95s)
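As in StopSecondaryNode above, `status` exits non-zero (7 here) once hosts are stopped, which makes it usable as a scripted liveness check; treating every non-zero code the same way is an assumption, not something this log establishes:

    out/minikube-linux-arm64 -p ha-847039 status || echo "cluster not fully running (exit $?)"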

                                                
                                    
TestMutliControlPlane/serial/RestartCluster (76.68s)

                                                
                                                
=== RUN   TestMutliControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-arm64 start -p ha-847039 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E0311 23:49:55.456417  987686 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/functional-270400/client.crt: no such file or directory
E0311 23:50:41.351185  987686 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/addons-340965/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-arm64 start -p ha-847039 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m15.74033404s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-arm64 -p ha-847039 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMutliControlPlane/serial/RestartCluster (76.68s)

TestMutliControlPlane/serial/DegradedAfterClusterRestart (0.63s)
=== RUN   TestMutliControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMutliControlPlane/serial/DegradedAfterClusterRestart (0.63s)

TestMutliControlPlane/serial/AddSecondaryNode (39.75s)
=== RUN   TestMutliControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-847039 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-arm64 node add -p ha-847039 --control-plane -v=7 --alsologtostderr: (38.715607802s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-arm64 -p ha-847039 status -v=7 --alsologtostderr
ha_test.go:611: (dbg) Done: out/minikube-linux-arm64 -p ha-847039 status -v=7 --alsologtostderr: (1.037923373s)
--- PASS: TestMutliControlPlane/serial/AddSecondaryNode (39.75s)

TestMutliControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.82s)
=== RUN   TestMutliControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMutliControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.82s)

TestJSONOutput/start/Command (58.36s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-679346 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd
E0311 23:52:11.612060  987686 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/functional-270400/client.crt: no such file or directory
E0311 23:52:39.297159  987686 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/functional-270400/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-679346 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd: (58.354208371s)
--- PASS: TestJSONOutput/start/Command (58.36s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.75s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-679346 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.75s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.66s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-679346 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.66s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.77s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-679346 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-679346 --output=json --user=testUser: (5.769799427s)
--- PASS: TestJSONOutput/stop/Command (5.77s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.25s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-384543 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-384543 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (93.455177ms)

-- stdout --
	{"specversion":"1.0","id":"6639e33d-1d39-4ff4-995b-e0d52fd41979","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-384543] minikube v1.32.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"0ca92802-de87-449d-adc8-8a093cd26ac3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18358"}}
	{"specversion":"1.0","id":"a669ac98-469d-4fab-bc60-1ee3de38516c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"64af5f99-15f4-47c5-89df-72c7c20bee97","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18358-982285/kubeconfig"}}
	{"specversion":"1.0","id":"64f28563-8230-457d-8512-42565e67b0ea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18358-982285/.minikube"}}
	{"specversion":"1.0","id":"b8e559fa-910d-4a09-9269-4c8aa6b75f65","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"54f2c239-56cf-48b3-a755-7e28ec8e4148","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"f766ba69-864e-414a-af4d-515610333b95","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-384543" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-384543
--- PASS: TestErrorJSONOutput (0.25s)
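Each line minikube emits with --output=json above is a self-contained CloudEvents-style JSON object (specversion/id/source/type/data), so the stream can be post-processed line by line. A minimal sketch, assuming jq is available and using an illustrative profile name that is not part of this test run:

	out/minikube-linux-arm64 start -p demo --output=json \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data | "\(.name): \(.message) (exit \(.exitcode))"'

Against the failed start captured above, this filter would surface DRV_UNSUPPORTED_OS: The driver 'fail' is not supported on linux/arm64 (exit 56).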

TestKicCustomNetwork/create_custom_network (38.6s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-324042 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-324042 --network=: (36.474612723s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-324042" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-324042
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-324042: (2.10894515s)
--- PASS: TestKicCustomNetwork/create_custom_network (38.60s)

TestKicCustomNetwork/use_default_bridge_network (34.9s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-115678 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-115678 --network=bridge: (32.859567606s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-115678" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-115678
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-115678: (2.012653538s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (34.90s)

TestKicExistingNetwork (34.53s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-130125 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-130125 --network=existing-network: (32.730352179s)
helpers_test.go:175: Cleaning up "existing-network-130125" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-130125
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-130125: (1.64499771s)
--- PASS: TestKicExistingNetwork (34.53s)
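Here the profile attaches to a Docker network that already exists rather than letting minikube create one; the test presumably pre-creates that network before starting. Done by hand, the equivalent would look something like this (illustrative sketch, not commands captured in this log):

	docker network create existing-network
	out/minikube-linux-arm64 start -p existing-network-130125 --network=existing-network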

TestKicCustomSubnet (31.96s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-615115 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-615115 --subnet=192.168.60.0/24: (29.808126396s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-615115 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-615115" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-615115
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-615115: (2.122926025s)
--- PASS: TestKicCustomSubnet (31.96s)
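The docker network inspect Go template above indexes the first IPAM config entry of the network, so it should echo back the range requested with --subnet. Run by hand (illustrative; the output shown is the expected value, not captured from this job):

	docker network inspect custom-subnet-615115 --format "{{(index .IPAM.Config 0).Subnet}}"
	192.168.60.0/24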

TestKicStaticIP (37.56s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-785894 --static-ip=192.168.200.200
E0311 23:55:41.350879  987686 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/addons-340965/client.crt: no such file or directory
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-785894 --static-ip=192.168.200.200: (35.313355771s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-785894 ip
helpers_test.go:175: Cleaning up "static-ip-785894" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-785894
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-785894: (2.059550624s)
--- PASS: TestKicStaticIP (37.56s)

TestMainNoArgs (0.07s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.07s)

TestMinikubeProfile (70.82s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-807608 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-807608 --driver=docker  --container-runtime=containerd: (31.902816128s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-810308 --driver=docker  --container-runtime=containerd
E0311 23:57:04.396589  987686 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/addons-340965/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-810308 --driver=docker  --container-runtime=containerd: (33.466950652s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-807608
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-810308
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-810308" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-810308
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-810308: (1.956988484s)
helpers_test.go:175: Cleaning up "first-807608" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-807608
E0311 23:57:11.611109  987686 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/functional-270400/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-807608: (2.221589816s)
--- PASS: TestMinikubeProfile (70.82s)

TestMountStart/serial/StartWithMountFirst (6.05s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-816594 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-816594 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.047527504s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.05s)

TestMountStart/serial/VerifyMountFirst (0.27s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-816594 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)

TestMountStart/serial/StartWithMountSecond (6.73s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-829878 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-829878 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.727393255s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.73s)

TestMountStart/serial/VerifyMountSecond (0.28s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-829878 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.28s)

TestMountStart/serial/DeleteFirst (1.62s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-816594 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-816594 --alsologtostderr -v=5: (1.615952664s)
--- PASS: TestMountStart/serial/DeleteFirst (1.62s)

TestMountStart/serial/VerifyMountPostDelete (0.27s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-829878 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

TestMountStart/serial/Stop (1.24s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-829878
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-829878: (1.243930717s)
--- PASS: TestMountStart/serial/Stop (1.24s)

TestMountStart/serial/RestartStopped (7.36s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-829878
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-829878: (6.361276271s)
--- PASS: TestMountStart/serial/RestartStopped (7.36s)

TestMountStart/serial/VerifyMountPostStop (0.26s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-829878 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

TestMultiNode/serial/FreshStart2Nodes (79.98s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-603445 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-603445 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m19.460867813s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-603445 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (79.98s)

TestMultiNode/serial/DeployApp2Nodes (4.93s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-603445 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-603445 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-603445 -- rollout status deployment/busybox: (2.821792345s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-603445 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-603445 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-603445 -- exec busybox-5b5d89c9d6-q6j7g -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-603445 -- exec busybox-5b5d89c9d6-rl9hk -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-603445 -- exec busybox-5b5d89c9d6-q6j7g -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-603445 -- exec busybox-5b5d89c9d6-rl9hk -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-603445 -- exec busybox-5b5d89c9d6-q6j7g -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-603445 -- exec busybox-5b5d89c9d6-rl9hk -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.93s)

TestMultiNode/serial/PingHostFrom2Pods (1.09s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-603445 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-603445 -- exec busybox-5b5d89c9d6-q6j7g -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-603445 -- exec busybox-5b5d89c9d6-q6j7g -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-603445 -- exec busybox-5b5d89c9d6-rl9hk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-603445 -- exec busybox-5b5d89c9d6-rl9hk -- sh -c "ping -c 1 192.168.58.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.09s)
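The shell pipeline above recovers the host's address from inside each pod: nslookup host.minikube.internal resolves the name, awk 'NR==5' keeps the fifth output line, and cut -d' ' -f3 takes its third space-separated field, the bare IP, which the follow-up ping -c 1 then targets (192.168.58.1 on this cluster's network). A rough sketch of the intermediate line, assuming BusyBox's classic nslookup output format:

	Address 1: 192.168.58.1 host.minikube.internal    <- line 5; field 3 is the IP passed to ping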

TestMultiNode/serial/AddNode (18.74s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-603445 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-603445 -v 3 --alsologtostderr: (18.069885478s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-603445 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (18.74s)

TestMultiNode/serial/MultiNodeLabels (0.11s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-603445 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.11s)

TestMultiNode/serial/ProfileList (0.37s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.37s)

TestMultiNode/serial/CopyFile (10.35s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-603445 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-603445 cp testdata/cp-test.txt multinode-603445:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-603445 ssh -n multinode-603445 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-603445 cp multinode-603445:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile515905802/001/cp-test_multinode-603445.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-603445 ssh -n multinode-603445 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-603445 cp multinode-603445:/home/docker/cp-test.txt multinode-603445-m02:/home/docker/cp-test_multinode-603445_multinode-603445-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-603445 ssh -n multinode-603445 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-603445 ssh -n multinode-603445-m02 "sudo cat /home/docker/cp-test_multinode-603445_multinode-603445-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-603445 cp multinode-603445:/home/docker/cp-test.txt multinode-603445-m03:/home/docker/cp-test_multinode-603445_multinode-603445-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-603445 ssh -n multinode-603445 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-603445 ssh -n multinode-603445-m03 "sudo cat /home/docker/cp-test_multinode-603445_multinode-603445-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-603445 cp testdata/cp-test.txt multinode-603445-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-603445 ssh -n multinode-603445-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-603445 cp multinode-603445-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile515905802/001/cp-test_multinode-603445-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-603445 ssh -n multinode-603445-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-603445 cp multinode-603445-m02:/home/docker/cp-test.txt multinode-603445:/home/docker/cp-test_multinode-603445-m02_multinode-603445.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-603445 ssh -n multinode-603445-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-603445 ssh -n multinode-603445 "sudo cat /home/docker/cp-test_multinode-603445-m02_multinode-603445.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-603445 cp multinode-603445-m02:/home/docker/cp-test.txt multinode-603445-m03:/home/docker/cp-test_multinode-603445-m02_multinode-603445-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-603445 ssh -n multinode-603445-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-603445 ssh -n multinode-603445-m03 "sudo cat /home/docker/cp-test_multinode-603445-m02_multinode-603445-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-603445 cp testdata/cp-test.txt multinode-603445-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-603445 ssh -n multinode-603445-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-603445 cp multinode-603445-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile515905802/001/cp-test_multinode-603445-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-603445 ssh -n multinode-603445-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-603445 cp multinode-603445-m03:/home/docker/cp-test.txt multinode-603445:/home/docker/cp-test_multinode-603445-m03_multinode-603445.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-603445 ssh -n multinode-603445-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-603445 ssh -n multinode-603445 "sudo cat /home/docker/cp-test_multinode-603445-m03_multinode-603445.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-603445 cp multinode-603445-m03:/home/docker/cp-test.txt multinode-603445-m02:/home/docker/cp-test_multinode-603445-m03_multinode-603445-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-603445 ssh -n multinode-603445-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-603445 ssh -n multinode-603445-m02 "sudo cat /home/docker/cp-test_multinode-603445-m03_multinode-603445-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.35s)

TestMultiNode/serial/StopNode (2.31s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-603445 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-603445 node stop m03: (1.226349971s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-603445 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-603445 status: exit status 7 (557.043168ms)

-- stdout --
	multinode-603445
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-603445-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-603445-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-603445 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-603445 status --alsologtostderr: exit status 7 (527.936922ms)

-- stdout --
	multinode-603445
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-603445-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-603445-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0311 23:59:36.967544 1101445 out.go:291] Setting OutFile to fd 1 ...
	I0311 23:59:36.967794 1101445 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 23:59:36.967824 1101445 out.go:304] Setting ErrFile to fd 2...
	I0311 23:59:36.967843 1101445 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0311 23:59:36.968089 1101445 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18358-982285/.minikube/bin
	I0311 23:59:36.968303 1101445 out.go:298] Setting JSON to false
	I0311 23:59:36.968361 1101445 mustload.go:65] Loading cluster: multinode-603445
	I0311 23:59:36.968391 1101445 notify.go:220] Checking for updates...
	I0311 23:59:36.968849 1101445 config.go:182] Loaded profile config "multinode-603445": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0311 23:59:36.968889 1101445 status.go:255] checking status of multinode-603445 ...
	I0311 23:59:36.969843 1101445 cli_runner.go:164] Run: docker container inspect multinode-603445 --format={{.State.Status}}
	I0311 23:59:36.986921 1101445 status.go:330] multinode-603445 host status = "Running" (err=<nil>)
	I0311 23:59:36.986947 1101445 host.go:66] Checking if "multinode-603445" exists ...
	I0311 23:59:36.987385 1101445 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-603445
	I0311 23:59:37.006700 1101445 host.go:66] Checking if "multinode-603445" exists ...
	I0311 23:59:37.007099 1101445 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0311 23:59:37.007156 1101445 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-603445
	I0311 23:59:37.040338 1101445 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34042 SSHKeyPath:/home/jenkins/minikube-integration/18358-982285/.minikube/machines/multinode-603445/id_rsa Username:docker}
	I0311 23:59:37.132690 1101445 ssh_runner.go:195] Run: systemctl --version
	I0311 23:59:37.137319 1101445 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0311 23:59:37.149075 1101445 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0311 23:59:37.212330 1101445 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:66 SystemTime:2024-03-11 23:59:37.202357254 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0311 23:59:37.212933 1101445 kubeconfig.go:125] found "multinode-603445" server: "https://192.168.58.2:8443"
	I0311 23:59:37.212960 1101445 api_server.go:166] Checking apiserver status ...
	I0311 23:59:37.213012 1101445 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0311 23:59:37.224151 1101445 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1427/cgroup
	I0311 23:59:37.233366 1101445 api_server.go:182] apiserver freezer: "5:freezer:/docker/d1b6457ffeec22ddf96f3b6ddf63c74a44e200820db3578644c97c64e4a007c2/kubepods/burstable/podaffca7ef9cbab3709a52eeab80dc1c37/e3a1205d88cf34999b311ea2a562d66cde669e6d35fba27efe3fe7c4242a3883"
	I0311 23:59:37.233444 1101445 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/d1b6457ffeec22ddf96f3b6ddf63c74a44e200820db3578644c97c64e4a007c2/kubepods/burstable/podaffca7ef9cbab3709a52eeab80dc1c37/e3a1205d88cf34999b311ea2a562d66cde669e6d35fba27efe3fe7c4242a3883/freezer.state
	I0311 23:59:37.242166 1101445 api_server.go:204] freezer state: "THAWED"
	I0311 23:59:37.242195 1101445 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0311 23:59:37.250484 1101445 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0311 23:59:37.250510 1101445 status.go:422] multinode-603445 apiserver status = Running (err=<nil>)
	I0311 23:59:37.250523 1101445 status.go:257] multinode-603445 status: &{Name:multinode-603445 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0311 23:59:37.250555 1101445 status.go:255] checking status of multinode-603445-m02 ...
	I0311 23:59:37.250863 1101445 cli_runner.go:164] Run: docker container inspect multinode-603445-m02 --format={{.State.Status}}
	I0311 23:59:37.266293 1101445 status.go:330] multinode-603445-m02 host status = "Running" (err=<nil>)
	I0311 23:59:37.266317 1101445 host.go:66] Checking if "multinode-603445-m02" exists ...
	I0311 23:59:37.266619 1101445 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-603445-m02
	I0311 23:59:37.283519 1101445 host.go:66] Checking if "multinode-603445-m02" exists ...
	I0311 23:59:37.283891 1101445 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0311 23:59:37.283939 1101445 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-603445-m02
	I0311 23:59:37.300748 1101445 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34047 SSHKeyPath:/home/jenkins/minikube-integration/18358-982285/.minikube/machines/multinode-603445-m02/id_rsa Username:docker}
	I0311 23:59:37.392262 1101445 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0311 23:59:37.403820 1101445 status.go:257] multinode-603445-m02 status: &{Name:multinode-603445-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0311 23:59:37.403856 1101445 status.go:255] checking status of multinode-603445-m03 ...
	I0311 23:59:37.404167 1101445 cli_runner.go:164] Run: docker container inspect multinode-603445-m03 --format={{.State.Status}}
	I0311 23:59:37.420890 1101445 status.go:330] multinode-603445-m03 host status = "Stopped" (err=<nil>)
	I0311 23:59:37.420915 1101445 status.go:343] host is not running, skipping remaining checks
	I0311 23:59:37.420922 1101445 status.go:257] multinode-603445-m03 status: &{Name:multinode-603445-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.31s)
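The stderr trace above shows how status verifies the apiserver on a running node: it finds the kube-apiserver process, confirms its freezer cgroup is THAWED (i.e. the node is not paused), and then probes the healthz endpoint. A rough manual equivalent of that last step, run from the host (illustrative sketch; -k skips verification of the cluster-signed certificate, and the log shows this endpoint returning 200 "ok"):

	curl -sk https://192.168.58.2:8443/healthz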

TestMultiNode/serial/StartAfterStop (9.37s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-603445 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-603445 node start m03 -v=7 --alsologtostderr: (8.597037073s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-603445 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.37s)

TestMultiNode/serial/RestartKeepsNodes (89.45s)
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-603445
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-603445
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-603445: (25.888204455s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-603445 --wait=true -v=8 --alsologtostderr
E0312 00:00:41.351501  987686 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/addons-340965/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-603445 --wait=true -v=8 --alsologtostderr: (1m3.411722136s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-603445
--- PASS: TestMultiNode/serial/RestartKeepsNodes (89.45s)

TestMultiNode/serial/DeleteNode (5.43s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-603445 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-603445 node delete m03: (4.773482982s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-603445 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.43s)

TestMultiNode/serial/StopMultiNode (24.04s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-603445 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-603445 stop: (23.844440854s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-603445 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-603445 status: exit status 7 (99.472367ms)

-- stdout --
	multinode-603445
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-603445-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-603445 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-603445 status --alsologtostderr: exit status 7 (97.265138ms)

-- stdout --
	multinode-603445
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-603445-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0312 00:01:45.683079 1109026 out.go:291] Setting OutFile to fd 1 ...
	I0312 00:01:45.683297 1109026 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0312 00:01:45.683359 1109026 out.go:304] Setting ErrFile to fd 2...
	I0312 00:01:45.683379 1109026 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0312 00:01:45.683652 1109026 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18358-982285/.minikube/bin
	I0312 00:01:45.683867 1109026 out.go:298] Setting JSON to false
	I0312 00:01:45.683928 1109026 mustload.go:65] Loading cluster: multinode-603445
	I0312 00:01:45.684025 1109026 notify.go:220] Checking for updates...
	I0312 00:01:45.684379 1109026 config.go:182] Loaded profile config "multinode-603445": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0312 00:01:45.684416 1109026 status.go:255] checking status of multinode-603445 ...
	I0312 00:01:45.685416 1109026 cli_runner.go:164] Run: docker container inspect multinode-603445 --format={{.State.Status}}
	I0312 00:01:45.701849 1109026 status.go:330] multinode-603445 host status = "Stopped" (err=<nil>)
	I0312 00:01:45.701884 1109026 status.go:343] host is not running, skipping remaining checks
	I0312 00:01:45.701891 1109026 status.go:257] multinode-603445 status: &{Name:multinode-603445 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0312 00:01:45.701917 1109026 status.go:255] checking status of multinode-603445-m02 ...
	I0312 00:01:45.702209 1109026 cli_runner.go:164] Run: docker container inspect multinode-603445-m02 --format={{.State.Status}}
	I0312 00:01:45.717853 1109026 status.go:330] multinode-603445-m02 host status = "Stopped" (err=<nil>)
	I0312 00:01:45.717872 1109026 status.go:343] host is not running, skipping remaining checks
	I0312 00:01:45.717879 1109026 status.go:257] multinode-603445-m02 status: &{Name:multinode-603445-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.04s)

TestMultiNode/serial/RestartMultiNode (55.56s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-603445 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0312 00:02:11.611451  987686 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/functional-270400/client.crt: no such file or directory
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-603445 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (54.847075427s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-603445 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (55.56s)

TestMultiNode/serial/ValidateNameConflict (36.67s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-603445
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-603445-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-603445-m02 --driver=docker  --container-runtime=containerd: exit status 14 (94.734526ms)

                                                
                                                
-- stdout --
	* [multinode-603445-m02] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18358
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18358-982285/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18358-982285/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-603445-m02' is duplicated with machine name 'multinode-603445-m02' in profile 'multinode-603445'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-603445-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-603445-m03 --driver=docker  --container-runtime=containerd: (33.839061459s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-603445
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-603445: exit status 80 (483.35441ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-603445 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-603445-m03 already exists in multinode-603445-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-603445-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-603445-m03: (2.175360683s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (36.67s)
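
What this test pins down is the uniqueness rule behind exit status 14: a new profile name may collide not only with an existing profile but also with a machine name inside a multi-node profile, which is why multinode-603445-m02 is rejected while multinode-603445-m03 is accepted. A minimal sketch of that rule, assuming an in-memory map of profiles to machine names (the real store lives under MINIKUBE_HOME); error text is illustrative:

package main

import "fmt"

func validateProfileName(name string, existing map[string][]string) error {
	for profile, machines := range existing {
		if name == profile {
			return fmt.Errorf("profile name %q already in use", name)
		}
		for _, m := range machines {
			if name == m {
				return fmt.Errorf("profile name %q is duplicated with machine name %q in profile %q", name, m, profile)
			}
		}
	}
	return nil
}

func main() {
	existing := map[string][]string{
		"multinode-603445": {"multinode-603445", "multinode-603445-m02"},
	}
	fmt.Println(validateProfileName("multinode-603445-m02", existing)) // rejected
	fmt.Println(validateProfileName("multinode-603445-m03", existing)) // nil: allowed
}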

                                                
                                    
TestPreload (119.35s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-498346 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4
E0312 00:03:34.657376  987686 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/functional-270400/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-498346 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m12.486779233s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-498346 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-498346 image pull gcr.io/k8s-minikube/busybox: (1.291540353s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-498346
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-498346: (12.075316125s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-498346 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-498346 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (30.762957673s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-498346 image list
helpers_test.go:175: Cleaning up "test-preload-498346" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-498346
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-498346: (2.495851962s)
--- PASS: TestPreload (119.35s)
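
The preload scenario above is a round trip: create a cluster with --preload=false on an older Kubernetes, pull an extra image into it, stop, restart without the flag, and confirm the image list still contains the pulled image. A minimal sketch driving the same sequence through os/exec, with a hypothetical profile name; the flags are the ones visible in the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func run(args ...string) error {
	fmt.Println("minikube", strings.Join(args, " "))
	out, err := exec.Command("minikube", args...).CombinedOutput()
	if err != nil {
		fmt.Printf("step failed: %v\n%s", err, out)
	}
	return err
}

func main() {
	p := "preload-demo" // hypothetical profile name
	steps := [][]string{
		{"start", "-p", p, "--preload=false", "--driver=docker", "--container-runtime=containerd", "--kubernetes-version=v1.24.4"},
		{"-p", p, "image", "pull", "gcr.io/k8s-minikube/busybox"},
		{"stop", "-p", p},
		{"start", "-p", p, "--driver=docker", "--container-runtime=containerd"}, // preload active again
		{"-p", p, "image", "list"},                                              // busybox must still be present
	}
	for _, s := range steps {
		if run(s...) != nil {
			return
		}
	}
}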

                                                
                                    
TestScheduledStopUnix (106.12s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-964980 --memory=2048 --driver=docker  --container-runtime=containerd
E0312 00:05:41.350913  987686 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/addons-340965/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-964980 --memory=2048 --driver=docker  --container-runtime=containerd: (29.718348925s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-964980 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-964980 -n scheduled-stop-964980
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-964980 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-964980 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-964980 -n scheduled-stop-964980
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-964980
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-964980 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-964980
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-964980: exit status 7 (73.761129ms)

                                                
                                                
-- stdout --
	scheduled-stop-964980
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-964980 -n scheduled-stop-964980
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-964980 -n scheduled-stop-964980: exit status 7 (83.048086ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-964980" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-964980
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-964980: (4.71602832s)
--- PASS: TestScheduledStopUnix (106.12s)
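
The scheduled-stop sequence above arms a delayed stop (--schedule 5m, then 15s), cancels one (--cancel-scheduled), and polls the remaining time via the {{.TimeToStop}} status field. A toy sketch of that arm/cancel/remaining-time pattern with time.AfterFunc; this only illustrates the idea, not minikube's actual implementation, which hands the schedule to a separate process:

package main

import (
	"fmt"
	"time"
)

// scheduledStop pairs a timer with its deadline so the remaining time
// (the TimeToStop field above) can be reported on demand.
type scheduledStop struct {
	timer    *time.Timer
	deadline time.Time
}

func schedule(d time.Duration, stop func()) *scheduledStop {
	return &scheduledStop{timer: time.AfterFunc(d, stop), deadline: time.Now().Add(d)}
}

// cancel reports true if the stop was disarmed before it fired.
func (s *scheduledStop) cancel() bool { return s.timer.Stop() }

func (s *scheduledStop) timeToStop() time.Duration { return time.Until(s.deadline) }

func main() {
	s := schedule(100*time.Millisecond, func() { fmt.Println("stopping cluster") })
	fmt.Println("time to stop:", s.timeToStop().Round(time.Millisecond))
	fmt.Println("cancelled:", s.cancel())
	time.Sleep(150 * time.Millisecond) // nothing fires: the stop was cancelled
}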

                                                
                                    
TestInsufficientStorage (10.5s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-437008 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
E0312 00:07:11.611210  987686 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/functional-270400/client.crt: no such file or directory
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-437008 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (8.004192353s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"9b7a57c0-63b0-464b-b2e6-cd30f72f358b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-437008] minikube v1.32.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"73e5b13e-03b0-4105-8bad-540c6083c1ac","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18358"}}
	{"specversion":"1.0","id":"726abd18-6702-4249-88c0-26daaa8f3a0a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"4eca18f7-4867-42fa-9076-d5a339ac4550","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18358-982285/kubeconfig"}}
	{"specversion":"1.0","id":"02d7aecd-d20f-49e8-89f2-2afff70b7a81","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18358-982285/.minikube"}}
	{"specversion":"1.0","id":"83f55be1-9952-4735-a366-3eb3c42dc720","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"933b039a-f097-4186-a1f8-2486286fd4f6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"91928495-c51a-4ea8-9b1b-50bacdb24ecd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"6f80b41f-8832-47e2-9128-b0024bbd58de","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"abfe92db-d9b3-47d9-bbaa-95c40c6e4297","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"8e965a2f-f4d5-43a6-9c8e-f4b4e254e296","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"01eade8f-6480-4b41-aa3c-4666c1ca46a5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-437008\" primary control-plane node in \"insufficient-storage-437008\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"e4a2a7f4-f2f2-4b48-9706-a8517b5fcac3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.42-1708944392-18244 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"513753cf-158e-4e3d-9e1e-a1be05400e12","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"8586dd81-7055-4a1d-8cbb-3020814e4a2b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-437008 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-437008 --output=json --layout=cluster: exit status 7 (288.621591ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-437008","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-437008","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0312 00:07:15.856702 1126952 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-437008" does not appear in /home/jenkins/minikube-integration/18358-982285/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-437008 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-437008 --output=json --layout=cluster: exit status 7 (293.735473ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-437008","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-437008","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0312 00:07:16.151654 1127004 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-437008" does not appear in /home/jenkins/minikube-integration/18358-982285/kubeconfig
	E0312 00:07:16.162240 1127004 status.go:560] unable to read event log: stat: stat /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/insufficient-storage-437008/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-437008" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-437008
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-437008: (1.908203447s)
--- PASS: TestInsufficientStorage (10.50s)
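
Each stdout line in the --output=json run above is a CloudEvents v1.0 envelope whose type field distinguishes progress steps (io.k8s.sigs.minikube.step), informational lines (io.k8s.sigs.minikube.info), and the terminal error (io.k8s.sigs.minikube.error). A minimal sketch that decodes such a stream and surfaces the error event's exit code and message, assuming the stream arrives on stdin:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event keeps only the envelope fields this sketch needs; the error event's
// data payload is all strings ("exitcode":"26", "name":"RSRC_DOCKER_STORAGE", ...).
type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin) // pipe `minikube start --output=json` here
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // error events can be long
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // tolerate non-JSON noise on the stream
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("exit %s (%s): %s\n", ev.Data["exitcode"], ev.Data["name"], ev.Data["message"])
		}
	}
}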

                                                
                                    
TestRunningBinaryUpgrade (80.96s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2566812989 start -p running-upgrade-287300 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2566812989 start -p running-upgrade-287300 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (36.535528988s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-287300 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0312 00:13:44.396777  987686 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/addons-340965/client.crt: no such file or directory
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-287300 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (40.081248723s)
helpers_test.go:175: Cleaning up "running-upgrade-287300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-287300
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-287300: (3.06421522s)
--- PASS: TestRunningBinaryUpgrade (80.96s)

                                                
                                    
TestKubernetesUpgrade (393.66s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-422392 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-422392 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m6.099867853s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-422392
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-422392: (1.595809595s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-422392 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-422392 status --format={{.Host}}: exit status 7 (120.008882ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-422392 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-422392 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (5m5.260474754s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-422392 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-422392 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-422392 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd: exit status 106 (129.801839ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-422392] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18358
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18358-982285/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18358-982285/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-422392
	    minikube start -p kubernetes-upgrade-422392 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4223922 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-422392 --kubernetes-version=v1.29.0-rc.2
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-422392 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-422392 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (17.226571304s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-422392" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-422392
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-422392: (3.05770937s)
--- PASS: TestKubernetesUpgrade (393.66s)
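
The downgrade refusal in the middle of this test (exit status 106, K8S_DOWNGRADE_UNSUPPORTED) only needs a version comparison: a requested --kubernetes-version older than the cluster's current version is rejected, while an equal or newer one proceeds. A minimal sketch of that guard, comparing just major.minor and ignoring pre-release tags such as -rc.2; the error text is illustrative:

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// majorMinor parses "v1.29.0-rc.2" into (1, 29), ignoring patch and pre-release.
func majorMinor(v string) (int, int) {
	v = strings.TrimPrefix(v, "v")
	v = strings.SplitN(v, "-", 2)[0]
	parts := strings.Split(v, ".")
	maj, _ := strconv.Atoi(parts[0])
	mnr, _ := strconv.Atoi(parts[1])
	return maj, mnr
}

func checkUpgrade(current, requested string) error {
	cMaj, cMin := majorMinor(current)
	rMaj, rMin := majorMinor(requested)
	if rMaj < cMaj || (rMaj == cMaj && rMin < cMin) {
		return fmt.Errorf("unable to safely downgrade existing Kubernetes %s cluster to %s", current, requested)
	}
	return nil
}

func main() {
	fmt.Println(checkUpgrade("v1.29.0-rc.2", "v1.20.0"))      // refused
	fmt.Println(checkUpgrade("v1.29.0-rc.2", "v1.29.0-rc.2")) // nil: restart at same version is fine
}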

                                                
                                    
TestMissingContainerUpgrade (145.7s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.1043782259 start -p missing-upgrade-151441 --memory=2200 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.1043782259 start -p missing-upgrade-151441 --memory=2200 --driver=docker  --container-runtime=containerd: (1m13.367881346s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-151441
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-151441
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-151441 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0312 00:10:41.357476  987686 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/addons-340965/client.crt: no such file or directory
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-151441 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m7.832486547s)
helpers_test.go:175: Cleaning up "missing-upgrade-151441" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-151441
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-151441: (2.645646052s)
--- PASS: TestMissingContainerUpgrade (145.70s)

                                                
                                    
TestPause/serial/Start (68.85s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-472528 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-472528 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (1m8.852928163s)
--- PASS: TestPause/serial/Start (68.85s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-925554 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-925554 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd: exit status 14 (111.109571ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-925554] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18358
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18358-982285/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18358-982285/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)
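
Exit status 14 here is pure argument validation: --no-kubernetes contradicts an explicit --kubernetes-version, so minikube bails out before touching the driver. A minimal sketch of the same mutual-exclusion check, wired up with the standard flag package rather than minikube's actual CLI plumbing:

package main

import (
	"flag"
	"fmt"
	"os"
)

func main() {
	noKubernetes := flag.Bool("no-kubernetes", false, "start without Kubernetes")
	kubernetesVersion := flag.String("kubernetes-version", "", "Kubernetes version to run")
	flag.Parse()

	if *noKubernetes && *kubernetesVersion != "" {
		fmt.Fprintln(os.Stderr, "cannot specify --kubernetes-version with --no-kubernetes")
		os.Exit(14) // MK_USAGE in minikube's exit-code scheme
	}
	fmt.Println("flags ok")
}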

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (42.65s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-925554 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-925554 --driver=docker  --container-runtime=containerd: (42.070259095s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-925554 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (42.65s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (16.48s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-925554 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-925554 --no-kubernetes --driver=docker  --container-runtime=containerd: (14.206699725s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-925554 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-925554 status -o json: exit status 2 (330.326567ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-925554","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-925554
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-925554: (1.938383233s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (16.48s)

                                                
                                    
TestNoKubernetes/serial/Start (5.38s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-925554 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-925554 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.377564848s)
--- PASS: TestNoKubernetes/serial/Start (5.38s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-925554 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-925554 "sudo systemctl is-active --quiet service kubelet": exit status 1 (283.478661ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)
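
The check above relies on systemctl's exit-code contract: `systemctl is-active --quiet` exits 0 when the unit is active and non-zero otherwise (the `Process exited with status 3` in stderr is systemd's code for an inactive unit), so a non-zero exit is the pass condition here. A minimal local sketch of reading that exit code in Go; the test runs the same command inside the node via `minikube ssh`:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Locally run what the test sends over ssh; exit 0 means active.
	err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
	var ee *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("kubelet is active")
	case errors.As(err, &ee):
		fmt.Println("kubelet not active, exit code:", ee.ExitCode()) // 3 = inactive per systemd
	default:
		fmt.Println("could not run systemctl:", err) // e.g. not a systemd host
	}
}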

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.06s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.06s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.23s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-925554
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-925554: (1.226791423s)
--- PASS: TestNoKubernetes/serial/Stop (1.23s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (7.88s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-925554 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-925554 --driver=docker  --container-runtime=containerd: (7.884124429s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.88s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (6.35s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-472528 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-472528 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (6.33914482s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (6.35s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.37s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-925554 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-925554 "sudo systemctl is-active --quiet service kubelet": exit status 1 (372.700634ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.37s)

                                                
                                    
TestPause/serial/Pause (1.03s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-472528 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-472528 --alsologtostderr -v=5: (1.025666004s)
--- PASS: TestPause/serial/Pause (1.03s)

                                                
                                    
TestPause/serial/VerifyStatus (0.32s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-472528 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-472528 --output=json --layout=cluster: exit status 2 (322.683899ms)

                                                
                                                
-- stdout --
	{"Name":"pause-472528","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-472528","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.32s)
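
The --layout=cluster JSON above encodes component health as HTTP-style status codes: 200 OK, 405 Stopped, 418 Paused, 500 Error, and 507 InsufficientStorage (compare the TestInsufficientStorage output earlier). A minimal sketch decoding that shape, with struct fields mirroring the keys printed above; the type names are illustrative:

package main

import (
	"encoding/json"
	"fmt"
)

type component struct {
	Name       string
	StatusCode int
	StatusName string
}

type clusterState struct {
	Name       string
	StatusCode int
	StatusName string
	Components map[string]component
	Nodes      []clusterState // node entries reuse the same shape
}

func main() {
	raw := `{"Name":"pause-472528","StatusCode":418,"StatusName":"Paused",
	  "Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},
	  "Nodes":[{"Name":"pause-472528","StatusCode":200,"StatusName":"OK",
	    "Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"}}}]}`
	var st clusterState
	if err := json.Unmarshal([]byte(raw), &st); err != nil {
		panic(err)
	}
	fmt.Printf("%s: %d %s\n", st.Name, st.StatusCode, st.StatusName)
	for _, c := range st.Nodes[0].Components {
		fmt.Printf("  %s: %d %s\n", c.Name, c.StatusCode, c.StatusName)
	}
}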

                                                
                                    
TestPause/serial/Unpause (0.76s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-472528 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.76s)

                                                
                                    
TestPause/serial/PauseAgain (1.05s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-472528 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-472528 --alsologtostderr -v=5: (1.053917215s)
--- PASS: TestPause/serial/PauseAgain (1.05s)

                                                
                                    
TestPause/serial/DeletePaused (2.83s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-472528 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-472528 --alsologtostderr -v=5: (2.832991422s)
--- PASS: TestPause/serial/DeletePaused (2.83s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.19s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-472528
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-472528: exit status 1 (37.928649ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-472528: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.19s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (1.19s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.19s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (110.07s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.497817404 start -p stopped-upgrade-558242 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.497817404 start -p stopped-upgrade-558242 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (46.304868335s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.497817404 -p stopped-upgrade-558242 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.497817404 -p stopped-upgrade-558242 stop: (19.928144344s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-558242 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0312 00:12:11.611136  987686 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/functional-270400/client.crt: no such file or directory
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-558242 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (43.832301968s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (110.07s)
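
The upgrade flow above is: start a cluster with an old release binary, stop it with that same binary, then start the stopped profile with the binary under test, which must adopt the old on-disk state. A minimal sketch of the sequence, with hypothetical binary paths and profile name (the test uses a versioned binary cached under /tmp):

package main

import (
	"fmt"
	"os/exec"
)

// mk runs one minikube invocation with the given binary against a profile,
// matching the `<binary> -p <profile> <subcommand>` form seen in the log.
func mk(bin, profile string, args ...string) error {
	out, err := exec.Command(bin, append([]string{"-p", profile}, args...)...).CombinedOutput()
	fmt.Printf("%s %v -> err=%v\n%s", bin, args, err, out)
	return err
}

func main() {
	const profile = "stopped-upgrade-demo"                              // hypothetical
	old, current := "/tmp/minikube-v1.26.0", "out/minikube-linux-arm64" // old path illustrative

	if mk(old, profile, "start", "--driver=docker", "--container-runtime=containerd") != nil {
		return
	}
	_ = mk(old, profile, "stop")
	// The new binary must adopt the old profile's on-disk state cleanly.
	_ = mk(current, profile, "start", "--driver=docker", "--container-runtime=containerd")
}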

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.23s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-558242
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-558242: (1.234149724s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.23s)

                                                
                                    
TestNetworkPlugins/group/false (6.87s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-399839 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-399839 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (314.874512ms)

                                                
                                                
-- stdout --
	* [false-399839] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18358
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18358-982285/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18358-982285/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0312 00:15:06.462112 1167226 out.go:291] Setting OutFile to fd 1 ...
	I0312 00:15:06.462242 1167226 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0312 00:15:06.462295 1167226 out.go:304] Setting ErrFile to fd 2...
	I0312 00:15:06.462301 1167226 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0312 00:15:06.462557 1167226 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18358-982285/.minikube/bin
	I0312 00:15:06.462962 1167226 out.go:298] Setting JSON to false
	I0312 00:15:06.465364 1167226 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":17854,"bootTime":1710184652,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0312 00:15:06.465463 1167226 start.go:139] virtualization:  
	I0312 00:15:06.468377 1167226 out.go:177] * [false-399839] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0312 00:15:06.471030 1167226 out.go:177]   - MINIKUBE_LOCATION=18358
	I0312 00:15:06.472948 1167226 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0312 00:15:06.471210 1167226 notify.go:220] Checking for updates...
	I0312 00:15:06.477409 1167226 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18358-982285/kubeconfig
	I0312 00:15:06.479197 1167226 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18358-982285/.minikube
	I0312 00:15:06.481299 1167226 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0312 00:15:06.483365 1167226 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0312 00:15:06.486123 1167226 config.go:182] Loaded profile config "kubernetes-upgrade-422392": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.29.0-rc.2
	I0312 00:15:06.486235 1167226 driver.go:392] Setting default libvirt URI to qemu:///system
	I0312 00:15:06.513765 1167226 docker.go:122] docker version: linux-25.0.4:Docker Engine - Community
	I0312 00:15:06.513941 1167226 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0312 00:15:06.645832 1167226 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:64 SystemTime:2024-03-12 00:15:06.634691172 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0312 00:15:06.645959 1167226 docker.go:295] overlay module found
	I0312 00:15:06.648685 1167226 out.go:177] * Using the docker driver based on user configuration
	I0312 00:15:06.651473 1167226 start.go:297] selected driver: docker
	I0312 00:15:06.651491 1167226 start.go:901] validating driver "docker" against <nil>
	I0312 00:15:06.651504 1167226 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0312 00:15:06.658303 1167226 out.go:177] 
	W0312 00:15:06.660602 1167226 out.go:239] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0312 00:15:06.662763 1167226 out.go:177] 

                                                
                                                
** /stderr **
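
The refusal above (exit status 14) is another validation-time check: with the containerd runtime a CNI is mandatory, so --cni=false never reaches cluster creation. A minimal sketch of that rule; the docker-only exemption is an assumption based on the error text, not a reading of minikube's code:

package main

import "fmt"

// validateCNI mirrors the rule implied by the error text above: a non-docker
// runtime needs a CNI, so an explicit --cni=false is a usage error.
func validateCNI(runtime, cni string) error {
	if cni == "false" && runtime != "docker" {
		return fmt.Errorf("the %q container runtime requires CNI", runtime)
	}
	return nil
}

func main() {
	fmt.Println(validateCNI("containerd", "false")) // rejected -> exit code 14 (MK_USAGE)
	fmt.Println(validateCNI("docker", "false"))     // nil under this sketch's assumption
}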
net_test.go:88: 
----------------------- debugLogs start: false-399839 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-399839

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-399839

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-399839

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-399839

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-399839

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-399839

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-399839

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-399839

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-399839

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-399839

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-399839" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-399839"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-399839" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-399839"

>>> host: /etc/resolv.conf:
* Profile "false-399839" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-399839"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-399839

>>> host: crictl pods:
* Profile "false-399839" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-399839"

>>> host: crictl containers:
* Profile "false-399839" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-399839"

>>> k8s: describe netcat deployment:
error: context "false-399839" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-399839" does not exist

>>> k8s: netcat logs:
error: context "false-399839" does not exist

>>> k8s: describe coredns deployment:
error: context "false-399839" does not exist

>>> k8s: describe coredns pods:
error: context "false-399839" does not exist

>>> k8s: coredns logs:
error: context "false-399839" does not exist

>>> k8s: describe api server pod(s):
error: context "false-399839" does not exist

>>> k8s: api server logs:
error: context "false-399839" does not exist

>>> host: /etc/cni:
* Profile "false-399839" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-399839"

>>> host: ip a s:
* Profile "false-399839" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-399839"

>>> host: ip r s:
* Profile "false-399839" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-399839"

>>> host: iptables-save:
* Profile "false-399839" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-399839"

>>> host: iptables table nat:
* Profile "false-399839" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-399839"

>>> k8s: describe kube-proxy daemon set:
error: context "false-399839" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-399839" does not exist

>>> k8s: kube-proxy logs:
error: context "false-399839" does not exist

>>> host: kubelet daemon status:
* Profile "false-399839" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-399839"

>>> host: kubelet daemon config:
* Profile "false-399839" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-399839"

>>> k8s: kubelet logs:
* Profile "false-399839" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-399839"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-399839" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-399839"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-399839" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-399839"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/18358-982285/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 12 Mar 2024 00:15:07 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-422392
contexts:
- context:
    cluster: kubernetes-upgrade-422392
    extensions:
    - extension:
        last-update: Tue, 12 Mar 2024 00:15:07 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-422392
  name: kubernetes-upgrade-422392
current-context: kubernetes-upgrade-422392
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-422392
  user:
    client-certificate: /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/kubernetes-upgrade-422392/client.crt
    client-key: /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/kubernetes-upgrade-422392/client.key

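Note the mismatch in the kubectl config above: the only cluster, context, and user present belong to kubernetes-upgrade-422392, while this debug pass queried false-399839, which is why every kubectl probe in this dump reports the context as missing. A minimal sketch of verifying that by hand with stock kubectl commands (names taken from the log above):

    kubectl config get-contexts                             # only kubernetes-upgrade-422392 is listed
    kubectl config use-context kubernetes-upgrade-422392
    kubectl --context kubernetes-upgrade-422392 get nodes
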
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-399839

>>> host: docker daemon status:
* Profile "false-399839" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-399839"

>>> host: docker daemon config:
* Profile "false-399839" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-399839"

>>> host: /etc/docker/daemon.json:
* Profile "false-399839" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-399839"

>>> host: docker system info:
* Profile "false-399839" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-399839"

>>> host: cri-docker daemon status:
* Profile "false-399839" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-399839"

>>> host: cri-docker daemon config:
* Profile "false-399839" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-399839"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-399839" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-399839"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-399839" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-399839"

>>> host: cri-dockerd version:
* Profile "false-399839" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-399839"

>>> host: containerd daemon status:
* Profile "false-399839" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-399839"

>>> host: containerd daemon config:
* Profile "false-399839" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-399839"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-399839" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-399839"

>>> host: /etc/containerd/config.toml:
* Profile "false-399839" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-399839"

>>> host: containerd config dump:
* Profile "false-399839" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-399839"

>>> host: crio daemon status:
* Profile "false-399839" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-399839"

>>> host: crio daemon config:
* Profile "false-399839" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-399839"

>>> host: /etc/crio:
* Profile "false-399839" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-399839"

>>> host: crio config:
* Profile "false-399839" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-399839"

----------------------- debugLogs end: false-399839 [took: 6.331177439s] --------------------------------
helpers_test.go:175: Cleaning up "false-399839" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-399839
--- PASS: TestNetworkPlugins/group/false (6.87s)

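The debugLogs dump above shows only "Profile not found" because the false-399839 profile was no longer (or never) running when the logs were gathered; the test itself still passes in 6.87s. A rough sketch of the equivalent manual flow, assuming the "false" group maps to minikube's --cni=false option and reusing this job's driver and runtime flags:

    minikube start -p false-399839 --cni=false --driver=docker --container-runtime=containerd
    minikube profile list
    minikube delete -p false-399839
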
TestStartStop/group/old-k8s-version/serial/FirstStart (152.66s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-571339 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
E0312 00:17:11.611465  987686 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/functional-270400/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-571339 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: (2m32.662456173s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (152.66s)

TestStartStop/group/old-k8s-version/serial/DeployApp (8.86s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-571339 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [0ba8e1ae-54a5-4fb8-ad79-038aa84303ac] Pending
helpers_test.go:344: "busybox" [0ba8e1ae-54a5-4fb8-ad79-038aa84303ac] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [0ba8e1ae-54a5-4fb8-ad79-038aa84303ac] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.00393211s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-571339 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.86s)

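DeployApp above creates a pod from testdata/busybox.yaml, waits for the integration-test=busybox label to be Running, then execs ulimit -n in it. The manifest itself is not reproduced in this report; a hypothetical stand-in with the same label and the busybox image that VerifyKubernetesImages later finds in this profile might look like:

    kubectl --context old-k8s-version-571339 create -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: busybox
      labels:
        integration-test: busybox
    spec:
      containers:
      - name: busybox
        image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
        command: ["sleep", "3600"]   # assumed keep-alive command, not shown in the report
    EOF
    kubectl --context old-k8s-version-571339 exec busybox -- /bin/sh -c "ulimit -n"
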
TestStartStop/group/no-preload/serial/FirstStart (77.74s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-820117 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-820117 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2: (1m17.739596033s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (77.74s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.56s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-571339 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-571339 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (2.398478921s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-571339 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.56s)

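EnableAddonWhileActive redirects the metrics-server addon to a different image and a placeholder fake.domain registry, then inspects the resulting deployment. Condensed from the commands in this block (a sketch using the short minikube binary name; the grep is just one way to spot the rewritten image reference):

    minikube addons enable metrics-server -p old-k8s-version-571339 \
        --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
        --registries=MetricsServer=fake.domain
    kubectl --context old-k8s-version-571339 -n kube-system describe deploy/metrics-server | grep -i image
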
TestStartStop/group/old-k8s-version/serial/Stop (13.6s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-571339 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-571339 --alsologtostderr -v=3: (13.600941233s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (13.60s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.27s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-571339 -n old-k8s-version-571339
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-571339 -n old-k8s-version-571339: exit status 7 (104.298064ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-571339 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.27s)

TestStartStop/group/no-preload/serial/DeployApp (9.39s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-820117 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [b478c735-5558-4740-b4c3-05f9de61ae0a] Pending
helpers_test.go:344: "busybox" [b478c735-5558-4740-b4c3-05f9de61ae0a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [b478c735-5558-4740-b4c3-05f9de61ae0a] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.004275747s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-820117 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.39s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.17s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-820117 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-820117 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.044608143s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-820117 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.17s)

TestStartStop/group/no-preload/serial/Stop (12.11s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-820117 --alsologtostderr -v=3
E0312 00:20:41.351922  987686 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/addons-340965/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-820117 --alsologtostderr -v=3: (12.11449347s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.11s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-820117 -n no-preload-820117
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-820117 -n no-preload-820117: exit status 7 (91.876174ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-820117 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/no-preload/serial/SecondStart (289.54s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-820117 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2
E0312 00:22:11.611185  987686 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/functional-270400/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-820117 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2: (4m48.991213721s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-820117 -n no-preload-820117
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (289.54s)

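Each serial group above runs the same stop/enable/start sequence: Stop halts the profile, EnableAddonAfterStop flips the dashboard addon while the cluster is down, and SecondStart repeats the exact FirstStart command before re-checking host status. Condensed from the no-preload-820117 commands already shown (sketch, short binary name):

    minikube stop -p no-preload-820117 --alsologtostderr -v=3
    minikube addons enable dashboard -p no-preload-820117 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
    minikube start -p no-preload-820117 --memory=2200 --wait=true --preload=false \
        --driver=docker --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2
    minikube status --format='{{.Host}}' -p no-preload-820117
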
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-7xjkr" [0901a2de-a7b1-4e65-aea0-5a703ae44346] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005110105s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-7dtbw" [9ef022ab-281b-4e44-8327-6d1b44c4a409] Running
E0312 00:25:41.351582  987686 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/addons-340965/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004422534s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.13s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-7xjkr" [0901a2de-a7b1-4e65-aea0-5a703ae44346] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005839204s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-820117 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.13s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-7dtbw" [9ef022ab-281b-4e44-8327-6d1b44c4a409] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004606241s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-571339 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.31s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-571339 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.31s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.34s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-820117 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.34s)

TestStartStop/group/old-k8s-version/serial/Pause (4.2s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-571339 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p old-k8s-version-571339 --alsologtostderr -v=1: (1.063448698s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-571339 -n old-k8s-version-571339
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-571339 -n old-k8s-version-571339: exit status 2 (425.657009ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-571339 -n old-k8s-version-571339
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-571339 -n old-k8s-version-571339: exit status 2 (427.514306ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-571339 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p old-k8s-version-571339 --alsologtostderr -v=1: (1.00929837s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-571339 -n old-k8s-version-571339
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-571339 -n old-k8s-version-571339
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (4.20s)

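In the Pause subtests a non-zero exit from minikube status is expected while components are paused (the harness marks it "may be ok"): {{.APIServer}} reports Paused and {{.Kubelet}} reports Stopped until unpause. Reduced to plain commands (profile name from the run above):

    minikube pause -p old-k8s-version-571339
    minikube status --format='{{.APIServer}}' -p old-k8s-version-571339   # Paused, exit status 2
    minikube unpause -p old-k8s-version-571339
    minikube status --format='{{.APIServer}}' -p old-k8s-version-571339   # exits 0 once resumed
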
TestStartStop/group/no-preload/serial/Pause (4.31s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-820117 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p no-preload-820117 --alsologtostderr -v=1: (1.055441271s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-820117 -n no-preload-820117
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-820117 -n no-preload-820117: exit status 2 (426.308074ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-820117 -n no-preload-820117
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-820117 -n no-preload-820117: exit status 2 (448.238354ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-820117 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p no-preload-820117 --alsologtostderr -v=1: (1.133397041s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-820117 -n no-preload-820117
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-820117 -n no-preload-820117
--- PASS: TestStartStop/group/no-preload/serial/Pause (4.31s)

TestStartStop/group/embed-certs/serial/FirstStart (71.13s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-239359 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-239359 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4: (1m11.125918982s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (71.13s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (69.41s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-463893 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-463893 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4: (1m9.408429855s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (69.41s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.35s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-463893 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [d7dc87ca-1925-4e43-87b1-6bfc57dc281e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0312 00:27:11.611776  987686 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/functional-270400/client.crt: no such file or directory
helpers_test.go:344: "busybox" [d7dc87ca-1925-4e43-87b1-6bfc57dc281e] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.00367732s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-463893 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.35s)

TestStartStop/group/embed-certs/serial/DeployApp (8.42s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-239359 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [3a76aa6c-309f-4f79-a5a5-44bbfce7b221] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [3a76aa6c-309f-4f79-a5a5-44bbfce7b221] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.004962961s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-239359 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.42s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.54s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-463893 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-463893 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.401544051s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-463893 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.54s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.66s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-239359 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-239359 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.505046978s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-239359 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.66s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-463893 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-463893 --alsologtostderr -v=3: (12.092586694s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.09s)

TestStartStop/group/embed-certs/serial/Stop (12.13s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-239359 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-239359 --alsologtostderr -v=3: (12.130635941s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.13s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-463893 -n default-k8s-diff-port-463893
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-463893 -n default-k8s-diff-port-463893: exit status 7 (80.368995ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-463893 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.24s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (273.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-463893 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-463893 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4: (4m32.646921189s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-463893 -n default-k8s-diff-port-463893
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (273.03s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.32s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-239359 -n embed-certs-239359
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-239359 -n embed-certs-239359: exit status 7 (130.438531ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-239359 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.32s)

TestStartStop/group/embed-certs/serial/SecondStart (303.75s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-239359 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4
E0312 00:29:02.959041  987686 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/old-k8s-version-571339/client.crt: no such file or directory
E0312 00:29:02.964332  987686 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/old-k8s-version-571339/client.crt: no such file or directory
E0312 00:29:02.974655  987686 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/old-k8s-version-571339/client.crt: no such file or directory
E0312 00:29:02.994974  987686 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/old-k8s-version-571339/client.crt: no such file or directory
E0312 00:29:03.035284  987686 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/old-k8s-version-571339/client.crt: no such file or directory
E0312 00:29:03.115712  987686 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/old-k8s-version-571339/client.crt: no such file or directory
E0312 00:29:03.276016  987686 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/old-k8s-version-571339/client.crt: no such file or directory
E0312 00:29:03.596541  987686 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/old-k8s-version-571339/client.crt: no such file or directory
E0312 00:29:04.236755  987686 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/old-k8s-version-571339/client.crt: no such file or directory
E0312 00:29:05.517247  987686 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/old-k8s-version-571339/client.crt: no such file or directory
E0312 00:29:08.078375  987686 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/old-k8s-version-571339/client.crt: no such file or directory
E0312 00:29:13.199352  987686 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/old-k8s-version-571339/client.crt: no such file or directory
E0312 00:29:23.440078  987686 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/old-k8s-version-571339/client.crt: no such file or directory
E0312 00:29:43.920346  987686 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/old-k8s-version-571339/client.crt: no such file or directory
E0312 00:30:24.397351  987686 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/addons-340965/client.crt: no such file or directory
E0312 00:30:24.880864  987686 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/old-k8s-version-571339/client.crt: no such file or directory
E0312 00:30:27.807632  987686 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/no-preload-820117/client.crt: no such file or directory
E0312 00:30:27.812941  987686 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/no-preload-820117/client.crt: no such file or directory
E0312 00:30:27.823219  987686 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/no-preload-820117/client.crt: no such file or directory
E0312 00:30:27.843492  987686 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/no-preload-820117/client.crt: no such file or directory
E0312 00:30:27.883847  987686 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/no-preload-820117/client.crt: no such file or directory
E0312 00:30:27.964185  987686 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/no-preload-820117/client.crt: no such file or directory
E0312 00:30:28.124619  987686 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/no-preload-820117/client.crt: no such file or directory
E0312 00:30:28.445639  987686 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/no-preload-820117/client.crt: no such file or directory
E0312 00:30:29.086061  987686 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/no-preload-820117/client.crt: no such file or directory
E0312 00:30:30.366688  987686 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/no-preload-820117/client.crt: no such file or directory
E0312 00:30:32.926838  987686 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/no-preload-820117/client.crt: no such file or directory
E0312 00:30:38.047941  987686 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/no-preload-820117/client.crt: no such file or directory
E0312 00:30:41.351441  987686 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/addons-340965/client.crt: no such file or directory
E0312 00:30:48.288408  987686 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/no-preload-820117/client.crt: no such file or directory
E0312 00:31:08.768568  987686 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/no-preload-820117/client.crt: no such file or directory
E0312 00:31:46.801426  987686 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/old-k8s-version-571339/client.crt: no such file or directory
E0312 00:31:49.729537  987686 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/no-preload-820117/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-239359 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.4: (5m3.305841491s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-239359 -n embed-certs-239359
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (303.75s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-cmwzt" [3b9208e8-5421-45b4-b3d1-37c02807c321] Running
E0312 00:32:11.611824  987686 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/functional-270400/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00429568s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-cmwzt" [3b9208e8-5421-45b4-b3d1-37c02807c321] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003727205s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-463893 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-463893 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)
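
VerifyKubernetesImages dumps every image in the node's container store and reports anything outside minikube's expected set; the kindnet and busybox images flagged above are informational, not failures. A minimal sketch for inspecting the same data by hand, assuming jq is installed (profile name taken from this run):

	# Dump the image list as JSON and pretty-print it for inspection.
	out/minikube-linux-arm64 -p default-k8s-diff-port-463893 image list --format=json | jq '.'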

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.19s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-463893 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-463893 -n default-k8s-diff-port-463893
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-463893 -n default-k8s-diff-port-463893: exit status 2 (333.535675ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-463893 -n default-k8s-diff-port-463893
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-463893 -n default-k8s-diff-port-463893: exit status 2 (334.377431ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-463893 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-463893 -n default-k8s-diff-port-463893
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-463893 -n default-k8s-diff-port-463893
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.19s)
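
The Pause subtest drives a fixed sequence: pause the profile, confirm via status that the apiserver reports Paused and the kubelet reports Stopped (status deliberately exits 2 in that state, hence the "may be ok" annotations), then unpause and re-check. A sketch of the same loop for manual debugging, using the profile name from this run:

	out/minikube-linux-arm64 pause -p default-k8s-diff-port-463893
	out/minikube-linux-arm64 status --format='{{.APIServer}}' -p default-k8s-diff-port-463893   # "Paused", exit status 2
	out/minikube-linux-arm64 status --format='{{.Kubelet}}' -p default-k8s-diff-port-463893     # "Stopped", exit status 2
	out/minikube-linux-arm64 unpause -p default-k8s-diff-port-463893
	out/minikube-linux-arm64 status --format='{{.APIServer}}' -p default-k8s-diff-port-463893   # "Running" again, exit status 0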

TestStartStop/group/newest-cni/serial/FirstStart (52.36s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-217939 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-217939 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2: (52.362085695s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (52.36s)
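
The newest-cni profile starts a release-candidate Kubernetes with CNI selected but no plugin installed: --network-plugin=cni plus --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 hands kubeadm a pod CIDR, and --wait=apiserver,system_pods,default_sa deliberately avoids waiting for workloads, since nothing can schedule until a CNI is deployed (see the "cni mode requires additional setup" warnings below). A trimmed sketch of the same start (logging and feature-gate flags dropped):

	out/minikube-linux-arm64 start -p newest-cni-217939 --memory=2200 \
	  --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
	  --wait=apiserver,system_pods,default_sa \
	  --driver=docker --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2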

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-wjjc8" [19c7748d-9930-43d3-8cf8-37bee49d897e] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004972058s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.13s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-wjjc8" [19c7748d-9930-43d3-8cf8-37bee49d897e] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004951213s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-239359 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.13s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.31s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-239359 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.31s)

TestStartStop/group/embed-certs/serial/Pause (4.78s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-239359 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p embed-certs-239359 --alsologtostderr -v=1: (1.046586014s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-239359 -n embed-certs-239359
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-239359 -n embed-certs-239359: exit status 2 (556.332579ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-239359 -n embed-certs-239359
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-239359 -n embed-certs-239359: exit status 2 (538.462666ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-239359 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p embed-certs-239359 --alsologtostderr -v=1: (1.194834817s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-239359 -n embed-certs-239359
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-239359 -n embed-certs-239359
--- PASS: TestStartStop/group/embed-certs/serial/Pause (4.78s)

TestNetworkPlugins/group/auto/Start (72.83s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-399839 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
E0312 00:33:11.650694  987686 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/no-preload-820117/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-399839 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (1m12.833829376s)
--- PASS: TestNetworkPlugins/group/auto/Start (72.83s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.67s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-217939 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-217939 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.664917576s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.67s)

TestStartStop/group/newest-cni/serial/Stop (1.36s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-217939 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-217939 --alsologtostderr -v=3: (1.357350598s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.36s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.34s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-217939 -n newest-cni-217939
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-217939 -n newest-cni-217939: exit status 7 (140.889188ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-217939 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.34s)
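
This subtest leans on two behaviors: status distinguishes a stopped host (exit status 7, "Stopped") from a hard error, and addons enable is persisted in the profile config while the cluster is down, so the dashboard comes up on the next start. A sketch of the same check against the profile from this run:

	out/minikube-linux-arm64 status --format='{{.Host}}' -p newest-cni-217939; echo "exit: $?"   # expect "Stopped", exit 7
	out/minikube-linux-arm64 addons enable dashboard -p newest-cni-217939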

TestStartStop/group/newest-cni/serial/SecondStart (22.05s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-217939 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-217939 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2: (21.566885634s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-217939 -n newest-cni-217939
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (22.05s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-217939 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.29s)

TestStartStop/group/newest-cni/serial/Pause (3.35s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-217939 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-217939 -n newest-cni-217939
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-217939 -n newest-cni-217939: exit status 2 (404.358319ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-217939 -n newest-cni-217939
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-217939 -n newest-cni-217939: exit status 2 (354.212182ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-217939 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-217939 -n newest-cni-217939
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-217939 -n newest-cni-217939
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.35s)
E0312 00:39:02.958679  987686 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/old-k8s-version-571339/client.crt: no such file or directory
E0312 00:39:11.779132  987686 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/auto-399839/client.crt: no such file or directory
E0312 00:39:11.784409  987686 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/auto-399839/client.crt: no such file or directory
E0312 00:39:11.794642  987686 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/auto-399839/client.crt: no such file or directory
E0312 00:39:11.815086  987686 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/auto-399839/client.crt: no such file or directory
E0312 00:39:11.855502  987686 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/auto-399839/client.crt: no such file or directory
E0312 00:39:11.936830  987686 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/auto-399839/client.crt: no such file or directory
E0312 00:39:12.097799  987686 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/auto-399839/client.crt: no such file or directory
E0312 00:39:12.418908  987686 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/auto-399839/client.crt: no such file or directory
E0312 00:39:13.059410  987686 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/auto-399839/client.crt: no such file or directory
E0312 00:39:14.339687  987686 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/auto-399839/client.crt: no such file or directory
E0312 00:39:16.900744  987686 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/auto-399839/client.crt: no such file or directory
E0312 00:39:22.021779  987686 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/auto-399839/client.crt: no such file or directory
E0312 00:39:32.262025  987686 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/auto-399839/client.crt: no such file or directory
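
The repeated cert_rotation.go:168 lines here and elsewhere in the run appear to come from client-go's certificate reload loop inside the shared test process: it keeps re-reading client certs for profiles that earlier tests have already deleted (auto-399839, no-preload-820117, ...), so the client.crt paths no longer exist. They are interleaved noise, not failures of the test they land in. To gauge the volume in a saved copy of this output (log filename hypothetical):

	grep -c 'cert_rotation.go:168' Docker_Linux_containerd_arm64_18358.log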

TestNetworkPlugins/group/kindnet/Start (61.34s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-399839 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
E0312 00:34:02.958622  987686 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/old-k8s-version-571339/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-399839 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (1m1.341090602s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (61.34s)

TestNetworkPlugins/group/auto/KubeletFlags (0.48s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-399839 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.48s)
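
KubeletFlags shells into the node and greps the kubelet process line; the suite uses it to confirm the kubelet was launched with the flags the chosen driver/runtime needs (with containerd, for example, a --container-runtime-endpoint pointing at the containerd socket). The same one-liner works interactively:

	out/minikube-linux-arm64 ssh -p auto-399839 "pgrep -a kubelet"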

TestNetworkPlugins/group/auto/NetCatPod (10.45s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-399839 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-87jxb" [e9d7efe7-7ab5-44bf-bda1-30b789792ecd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-87jxb" [e9d7efe7-7ab5-44bf-bda1-30b789792ecd] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.003886308s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.45s)
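
NetCatPod applies a small netcat deployment and polls until its pod is Ready; the Pending / Running lines above are snapshots from that poller. Outside the suite, the same wait can be expressed directly with kubectl, e.g.:

	kubectl --context auto-399839 replace --force -f testdata/netcat-deployment.yaml
	kubectl --context auto-399839 wait pod -l app=netcat --for=condition=Ready --timeout=15m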

TestNetworkPlugins/group/auto/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-399839 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.23s)

TestNetworkPlugins/group/auto/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-399839 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.20s)

TestNetworkPlugins/group/auto/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-399839 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.17s)
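
The DNS, Localhost, and HairPin subtests probe three distinct paths from inside the netcat pod: cluster DNS resolution of kubernetes.default, a plain localhost connection, and a hairpin connection in which the pod reaches itself back through its own netcat service. The nc flags: -z connects without sending data, -w 5 sets a five-second timeout, -i 5 a delay interval. The three probes side by side, verbatim from this group:

	kubectl --context auto-399839 exec deployment/netcat -- nslookup kubernetes.default
	kubectl --context auto-399839 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
	kubectl --context auto-399839 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"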

TestNetworkPlugins/group/calico/Start (83.18s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-399839 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-399839 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (1m23.183338016s)
--- PASS: TestNetworkPlugins/group/calico/Start (83.18s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-x9qlv" [6e7e9bdb-5353-4d69-bce5-41f4681bd051] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.00393098s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
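
ControllerPod only runs for CNIs that ship a node agent: it waits for the plugin's own pod to be healthy before any connectivity tests (label app=kindnet in kube-system here; the calico and flannel groups below wait on k8s-app=calico-node in kube-system and app=flannel in kube-flannel). A manual equivalent:

	kubectl --context kindnet-399839 -n kube-system get pods -l app=kindnet
	kubectl --context kindnet-399839 -n kube-system wait pod -l app=kindnet --for=condition=Ready --timeout=10m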

TestNetworkPlugins/group/kindnet/KubeletFlags (0.34s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-399839 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.34s)

TestNetworkPlugins/group/kindnet/NetCatPod (9.32s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-399839 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-hgvwq" [a9c31f82-aa38-49b6-aeed-dd8c1208612b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-hgvwq" [a9c31f82-aa38-49b6-aeed-dd8c1208612b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.004688733s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.32s)

TestNetworkPlugins/group/kindnet/DNS (0.32s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-399839 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.32s)

TestNetworkPlugins/group/kindnet/Localhost (0.31s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-399839 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.31s)

TestNetworkPlugins/group/kindnet/HairPin (0.23s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-399839 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.23s)

TestNetworkPlugins/group/custom-flannel/Start (63.2s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-399839 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
E0312 00:35:41.351574  987686 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/addons-340965/client.crt: no such file or directory
E0312 00:35:55.491811  987686 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/no-preload-820117/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-399839 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (1m3.204509592s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (63.20s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-nx2pk" [5697fdcf-2860-439a-864e-28eab91dba1f] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005496953s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-399839 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.29s)

TestNetworkPlugins/group/calico/NetCatPod (11.36s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-399839 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-cz7jt" [d8eac8bc-ce6a-41e9-8dfd-bdcc554e5d02] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-cz7jt" [d8eac8bc-ce6a-41e9-8dfd-bdcc554e5d02] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.003970039s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.36s)

TestNetworkPlugins/group/calico/DNS (0.27s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-399839 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.27s)

TestNetworkPlugins/group/calico/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-399839 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.20s)

TestNetworkPlugins/group/calico/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-399839 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.17s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.42s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-399839 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.42s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (10.37s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-399839 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-6svch" [422db879-4e21-4eb1-85e5-304aec53baf1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-6svch" [422db879-4e21-4eb1-85e5-304aec53baf1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.00451191s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.37s)

TestNetworkPlugins/group/custom-flannel/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-399839 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.25s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-399839 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.21s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.22s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-399839 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.22s)

TestNetworkPlugins/group/enable-default-cni/Start (91.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-399839 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
E0312 00:36:54.659163  987686 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/functional-270400/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-399839 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (1m31.145084434s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (91.15s)

TestNetworkPlugins/group/flannel/Start (63.99s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-399839 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
E0312 00:37:15.850673  987686 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/default-k8s-diff-port-463893/client.crt: no such file or directory
E0312 00:37:20.970870  987686 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/default-k8s-diff-port-463893/client.crt: no such file or directory
E0312 00:37:31.211923  987686 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/default-k8s-diff-port-463893/client.crt: no such file or directory
E0312 00:37:51.692852  987686 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/default-k8s-diff-port-463893/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-399839 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (1m3.992319951s)
--- PASS: TestNetworkPlugins/group/flannel/Start (63.99s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-2579x" [360d4ba2-da9c-4b09-8598-41513b3e8ea6] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004768961s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.34s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-399839 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.34s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.31s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-399839 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-x8hh4" [670f3ff7-9412-4ffe-af51-77d850eabc02] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-x8hh4" [670f3ff7-9412-4ffe-af51-77d850eabc02] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.015088713s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.31s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.39s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-399839 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.39s)

TestNetworkPlugins/group/flannel/NetCatPod (9.28s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-399839 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-gxfzd" [c3d8cb10-8262-43dc-bc09-b1414576c8d9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-gxfzd" [c3d8cb10-8262-43dc-bc09-b1414576c8d9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.003959223s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.28s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-399839 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.20s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-399839 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
E0312 00:38:32.653137  987686 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/default-k8s-diff-port-463893/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-399839 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

TestNetworkPlugins/group/flannel/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-399839 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.21s)

TestNetworkPlugins/group/flannel/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-399839 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.16s)

TestNetworkPlugins/group/flannel/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-399839 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.16s)

TestNetworkPlugins/group/bridge/Start (48.72s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-399839 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-399839 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (48.715662816s)
--- PASS: TestNetworkPlugins/group/bridge/Start (48.72s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-399839 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.30s)

TestNetworkPlugins/group/bridge/NetCatPod (10.25s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-399839 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-wll4c" [36760a90-949f-44c1-893e-7131798fa7ed] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0312 00:39:49.958474  987686 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/kindnet-399839/client.crt: no such file or directory
E0312 00:39:49.964146  987686 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/kindnet-399839/client.crt: no such file or directory
E0312 00:39:49.974816  987686 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/kindnet-399839/client.crt: no such file or directory
E0312 00:39:49.995175  987686 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/kindnet-399839/client.crt: no such file or directory
E0312 00:39:50.035469  987686 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/kindnet-399839/client.crt: no such file or directory
E0312 00:39:50.116095  987686 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/kindnet-399839/client.crt: no such file or directory
E0312 00:39:50.276741  987686 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/kindnet-399839/client.crt: no such file or directory
E0312 00:39:50.597454  987686 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/kindnet-399839/client.crt: no such file or directory
E0312 00:39:51.238379  987686 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/kindnet-399839/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-wll4c" [36760a90-949f-44c1-893e-7131798fa7ed] Running
E0312 00:39:52.518624  987686 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/kindnet-399839/client.crt: no such file or directory
E0312 00:39:52.743023  987686 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/auto-399839/client.crt: no such file or directory
E0312 00:39:54.574050  987686 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/default-k8s-diff-port-463893/client.crt: no such file or directory
E0312 00:39:55.078880  987686 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/kindnet-399839/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.003674069s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.25s)

TestNetworkPlugins/group/bridge/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-399839 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

TestNetworkPlugins/group/bridge/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-399839 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.17s)

TestNetworkPlugins/group/bridge/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-399839 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

Test skip (31/335)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.28.4/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

TestDownloadOnly/v1.28.4/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

TestDownloadOnly/v1.28.4/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.4/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.4/kubectl (0.00s)

TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

TestDownloadOnly/v1.29.0-rc.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

TestDownloadOnlyKic (0.62s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-098686 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-098686" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-098686
--- SKIP: TestDownloadOnlyKic (0.62s)
TestOffline (0s)
=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)
TestAddons/parallel/HelmTiller (0s)
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:444: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)
TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)
TestDockerFlags (0s)
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)
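Editor's note: the suite was invoked with --container-runtime=containerd, so every docker-runtime-only test bails out early. A hedged sketch of that guard, with a hypothetical flag standing in for the harness's real wiring in docker_test.go:

    package example

    import (
        "flag"
        "testing"
    )

    // containerRuntime mirrors the suite's --container-runtime setting;
    // the actual minikube test harness may plumb this differently.
    var containerRuntime = flag.String("container-runtime", "docker", "container runtime under test")

    func TestDockerFlagsSketch(t *testing.T) {
        if *containerRuntime != "docker" {
            t.Skipf("skipping: only runs with docker container runtime, currently testing %s", *containerRuntime)
        }
        // ... start minikube with --docker-opt flags and verify dockerd picked them up ...
    }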
TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)
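Editor's note: the arm64 skips throughout this report trace to an architecture guard. A minimal, hypothetical sketch (the real check is in driver_install_or_update_test.go):

    package example

    import (
        "runtime"
        "testing"
    )

    func TestKVMDriverInstallOrUpdateSketch(t *testing.T) {
        // The driver install/update flow is not exercised on arm64; see
        // https://github.com/kubernetes/minikube/issues/10144.
        if runtime.GOARCH == "arm64" {
            t.Skip("Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144")
        }
        // ... install or update the KVM2 driver and check its reported version ...
    }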
TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)
TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)
TestFunctional/parallel/MySQL (0s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)
TestFunctional/parallel/DockerEnv (0s)
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)
TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)
TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)
TestImageBuild (0s)
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)
TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)
TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)
TestSkaffold (0s)
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)
TestStartStop/group/disable-driver-mounts (0.24s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-335624" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-335624
--- SKIP: TestStartStop/group/disable-driver-mounts (0.24s)
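Editor's note: the two helpers_test.go lines above show the cleanup contract: even a test that skips still deletes the profile it registered, so later tests start from a clean slate. A hedged sketch of that pattern, grounded in the delete command visible in the log (cleanupProfile is an illustrative name, not the real helper):

    package example

    import (
        "os/exec"
        "testing"
    )

    // cleanupProfile deletes a minikube profile, logging but not failing on error,
    // mirroring the "Cleaning up ... profile" lines in this report.
    func cleanupProfile(t *testing.T, profile string) {
        t.Helper()
        t.Logf("Cleaning up %q profile ...", profile)
        if out, err := exec.Command("out/minikube-linux-arm64", "delete", "-p", profile).CombinedOutput(); err != nil {
            t.Logf("failed to delete profile %q: %v\n%s", profile, err, out)
        }
    }

    // Usage: defer cleanupProfile(t, "disable-driver-mounts-335624")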
TestNetworkPlugins/group/kubenet (6.08s)
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-399839 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-399839

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-399839

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-399839

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-399839

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-399839

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-399839

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-399839

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-399839

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-399839

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-399839

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-399839" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-399839"

>>> host: /etc/hosts:
* Profile "kubenet-399839" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-399839"

>>> host: /etc/resolv.conf:
* Profile "kubenet-399839" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-399839"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: kubenet-399839

>>> host: crictl pods:
* Profile "kubenet-399839" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-399839"

>>> host: crictl containers:
* Profile "kubenet-399839" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-399839"

>>> k8s: describe netcat deployment:
error: context "kubenet-399839" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-399839" does not exist

>>> k8s: netcat logs:
error: context "kubenet-399839" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-399839" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-399839" does not exist

>>> k8s: coredns logs:
error: context "kubenet-399839" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-399839" does not exist

>>> k8s: api server logs:
error: context "kubenet-399839" does not exist

>>> host: /etc/cni:
* Profile "kubenet-399839" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-399839"

>>> host: ip a s:
* Profile "kubenet-399839" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-399839"

>>> host: ip r s:
* Profile "kubenet-399839" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-399839"

>>> host: iptables-save:
* Profile "kubenet-399839" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-399839"

>>> host: iptables table nat:
* Profile "kubenet-399839" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-399839"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-399839" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-399839" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-399839" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-399839" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-399839"

>>> host: kubelet daemon config:
* Profile "kubenet-399839" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-399839"

>>> k8s: kubelet logs:
* Profile "kubenet-399839" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-399839"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-399839" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-399839"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-399839" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-399839"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/18358-982285/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 12 Mar 2024 00:14:51 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-422392
contexts:
- context:
    cluster: kubernetes-upgrade-422392
    extensions:
    - extension:
        last-update: Tue, 12 Mar 2024 00:14:51 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-422392
  name: kubernetes-upgrade-422392
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-422392
  user:
    client-certificate: /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/kubernetes-upgrade-422392/client.crt
    client-key: /home/jenkins/minikube-integration/18358-982285/.minikube/profiles/kubernetes-upgrade-422392/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-399839

>>> host: docker daemon status:
* Profile "kubenet-399839" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-399839"

>>> host: docker daemon config:
* Profile "kubenet-399839" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-399839"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-399839" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-399839"

>>> host: docker system info:
* Profile "kubenet-399839" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-399839"

>>> host: cri-docker daemon status:
* Profile "kubenet-399839" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-399839"

>>> host: cri-docker daemon config:
* Profile "kubenet-399839" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-399839"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-399839" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-399839"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-399839" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-399839"

>>> host: cri-dockerd version:
* Profile "kubenet-399839" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-399839"

>>> host: containerd daemon status:
* Profile "kubenet-399839" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-399839"

>>> host: containerd daemon config:
* Profile "kubenet-399839" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-399839"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-399839" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-399839"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-399839" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-399839"

>>> host: containerd config dump:
* Profile "kubenet-399839" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-399839"

>>> host: crio daemon status:
* Profile "kubenet-399839" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-399839"

>>> host: crio daemon config:
* Profile "kubenet-399839" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-399839"

>>> host: /etc/crio:
* Profile "kubenet-399839" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-399839"

>>> host: crio config:
* Profile "kubenet-399839" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-399839"

----------------------- debugLogs end: kubenet-399839 [took: 5.820647047s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-399839" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-399839
--- SKIP: TestNetworkPlugins/group/kubenet (6.08s)
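Editor's note: every probe in the debugLogs dump above failed the same way because the skip fires before "minikube start", so the kubenet-399839 kubectl context is never written (the kubectl config dump only shows a leftover kubernetes-upgrade-422392 entry). A hedged sketch of how such a probe behaves; the exact collection code lives in the test helpers and may differ:

    package example

    import (
        "os/exec"
        "testing"
    )

    func probeContext(t *testing.T, ctx string) {
        t.Helper()
        // With no matching entry in the kubeconfig, kubectl exits non-zero with
        // "context was not found for specified context: <name>".
        out, err := exec.Command("kubectl", "--context", ctx, "get", "pods", "-A").CombinedOutput()
        if err != nil {
            t.Logf("probe against %q failed (expected for a never-started profile): %s", ctx, out)
        }
    }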
TestNetworkPlugins/group/cilium (6.5s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-399839 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-399839

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-399839

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-399839

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-399839

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-399839

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-399839

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-399839

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-399839

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-399839

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-399839

>>> host: /etc/nsswitch.conf:
* Profile "cilium-399839" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-399839"

>>> host: /etc/hosts:
* Profile "cilium-399839" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-399839"

>>> host: /etc/resolv.conf:
* Profile "cilium-399839" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-399839"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: cilium-399839

>>> host: crictl pods:
* Profile "cilium-399839" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-399839"

>>> host: crictl containers:
* Profile "cilium-399839" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-399839"

>>> k8s: describe netcat deployment:
error: context "cilium-399839" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-399839" does not exist

>>> k8s: netcat logs:
error: context "cilium-399839" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-399839" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-399839" does not exist

>>> k8s: coredns logs:
error: context "cilium-399839" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-399839" does not exist

>>> k8s: api server logs:
error: context "cilium-399839" does not exist

>>> host: /etc/cni:
* Profile "cilium-399839" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-399839"

>>> host: ip a s:
* Profile "cilium-399839" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-399839"

>>> host: ip r s:
* Profile "cilium-399839" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-399839"

>>> host: iptables-save:
* Profile "cilium-399839" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-399839"

>>> host: iptables table nat:
* Profile "cilium-399839" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-399839"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-399839

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-399839

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-399839" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-399839" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-399839

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-399839

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-399839" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-399839" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-399839" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-399839" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-399839" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-399839" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-399839"

>>> host: kubelet daemon config:
* Profile "cilium-399839" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-399839"

>>> k8s: kubelet logs:
* Profile "cilium-399839" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-399839"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-399839" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-399839"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-399839" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-399839"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-399839

>>> host: docker daemon status:
* Profile "cilium-399839" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-399839"

>>> host: docker daemon config:
* Profile "cilium-399839" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-399839"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-399839" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-399839"

>>> host: docker system info:
* Profile "cilium-399839" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-399839"

>>> host: cri-docker daemon status:
* Profile "cilium-399839" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-399839"

>>> host: cri-docker daemon config:
* Profile "cilium-399839" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-399839"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-399839" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-399839"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-399839" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-399839"

>>> host: cri-dockerd version:
* Profile "cilium-399839" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-399839"

>>> host: containerd daemon status:
* Profile "cilium-399839" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-399839"

>>> host: containerd daemon config:
* Profile "cilium-399839" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-399839"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-399839" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-399839"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-399839" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-399839"

>>> host: containerd config dump:
* Profile "cilium-399839" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-399839"

>>> host: crio daemon status:
* Profile "cilium-399839" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-399839"

>>> host: crio daemon config:
* Profile "cilium-399839" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-399839"

>>> host: /etc/crio:
* Profile "cilium-399839" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-399839"

>>> host: crio config:
* Profile "cilium-399839" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-399839"

----------------------- debugLogs end: cilium-399839 [took: 6.261995474s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-399839" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-399839
--- SKIP: TestNetworkPlugins/group/cilium (6.50s)